Jan 20 14:50:03 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 20 14:50:03 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 14:50:03 crc restorecon[4750]: 
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 
14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:03 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 14:50:04 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 20 14:50:04 crc kubenswrapper[4949]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.562419 4949 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568305 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568337 4949 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568347 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568356 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568367 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568375 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568383 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568391 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568399 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568407 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568427 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568436 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568443 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568451 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568458 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568466 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568477 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568487 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568497 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568506 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568514 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568547 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568557 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568566 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568575 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568583 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568593 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568602 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568609 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568620 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568629 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568638 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568646 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568655 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568663 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568671 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568680 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568688 4949 feature_gate.go:330] unrecognized feature gate: Example
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568695 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568703 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568711 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568718 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568726 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568734 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568744 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568753 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568762 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568770 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568778 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568785 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568793 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568800 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568808 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568816 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568824 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568832 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568839 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568847 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568855 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568862 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568870 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568878 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568885 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568894 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568901 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568911 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568918 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568926 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568934 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568942 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.568949 4949 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569341 4949 flags.go:64] FLAG: --address="0.0.0.0" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569361 4949 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569376 4949 flags.go:64] FLAG: --anonymous-auth="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569387 4949 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569399 4949 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569408 4949 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569420 4949 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569432 4949 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569442 4949 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569451 4949 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569460 4949 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569472 4949 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569481 4949 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569491 4949 flags.go:64] FLAG: --cgroup-root="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569500 4949 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569510 4949 flags.go:64] FLAG: --client-ca-file="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569547 4949 flags.go:64] FLAG: --cloud-config="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569556 4949 flags.go:64] FLAG: --cloud-provider="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569565 4949 flags.go:64] FLAG: --cluster-dns="[]" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569577 4949 flags.go:64] FLAG: --cluster-domain="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569586 4949 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569595 4949 flags.go:64] FLAG: --config-dir="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569604 4949 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569614 4949 flags.go:64] FLAG: --container-log-max-files="5" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569626 4949 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569635 4949 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569644 4949 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569654 4949 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569663 4949 flags.go:64] FLAG: --contention-profiling="false" Jan 20 14:50:04 crc 
kubenswrapper[4949]: I0120 14:50:04.569672 4949 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569681 4949 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569691 4949 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569700 4949 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569710 4949 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569719 4949 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569728 4949 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569737 4949 flags.go:64] FLAG: --enable-load-reader="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569747 4949 flags.go:64] FLAG: --enable-server="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569756 4949 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569767 4949 flags.go:64] FLAG: --event-burst="100" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569776 4949 flags.go:64] FLAG: --event-qps="50" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569785 4949 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569794 4949 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569802 4949 flags.go:64] FLAG: --eviction-hard="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569813 4949 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569822 4949 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569831 4949 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569842 4949 flags.go:64] FLAG: --eviction-soft="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569852 4949 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569861 4949 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569870 4949 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569879 4949 flags.go:64] FLAG: --experimental-mounter-path="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569888 4949 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569896 4949 flags.go:64] FLAG: --fail-swap-on="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569905 4949 flags.go:64] FLAG: --feature-gates="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569916 4949 flags.go:64] FLAG: --file-check-frequency="20s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569925 4949 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569934 4949 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569943 4949 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 20 14:50:04 crc 
kubenswrapper[4949]: I0120 14:50:04.569952 4949 flags.go:64] FLAG: --healthz-port="10248" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569961 4949 flags.go:64] FLAG: --help="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569970 4949 flags.go:64] FLAG: --hostname-override="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569979 4949 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569988 4949 flags.go:64] FLAG: --http-check-frequency="20s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.569997 4949 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570006 4949 flags.go:64] FLAG: --image-credential-provider-config="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570016 4949 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570025 4949 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570035 4949 flags.go:64] FLAG: --image-service-endpoint="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570045 4949 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570054 4949 flags.go:64] FLAG: --kube-api-burst="100" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570063 4949 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570072 4949 flags.go:64] FLAG: --kube-api-qps="50" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570081 4949 flags.go:64] FLAG: --kube-reserved="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570090 4949 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570099 4949 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570108 4949 flags.go:64] FLAG: --kubelet-cgroups="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570117 4949 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570126 4949 flags.go:64] FLAG: --lock-file="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570136 4949 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570145 4949 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570155 4949 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570168 4949 flags.go:64] FLAG: --log-json-split-stream="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570177 4949 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570186 4949 flags.go:64] FLAG: --log-text-split-stream="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570195 4949 flags.go:64] FLAG: --logging-format="text" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570204 4949 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570214 4949 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570222 4949 flags.go:64] FLAG: --manifest-url="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 
14:50:04.570231 4949 flags.go:64] FLAG: --manifest-url-header="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570242 4949 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570251 4949 flags.go:64] FLAG: --max-open-files="1000000" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570262 4949 flags.go:64] FLAG: --max-pods="110" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570271 4949 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570280 4949 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570289 4949 flags.go:64] FLAG: --memory-manager-policy="None" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570298 4949 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570307 4949 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570316 4949 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570325 4949 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570344 4949 flags.go:64] FLAG: --node-status-max-images="50" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570354 4949 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570363 4949 flags.go:64] FLAG: --oom-score-adj="-999" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570372 4949 flags.go:64] FLAG: --pod-cidr="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570381 4949 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570393 4949 flags.go:64] FLAG: --pod-manifest-path="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570402 4949 flags.go:64] FLAG: --pod-max-pids="-1" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570412 4949 flags.go:64] FLAG: --pods-per-core="0" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570421 4949 flags.go:64] FLAG: --port="10250" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570430 4949 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570439 4949 flags.go:64] FLAG: --provider-id="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570448 4949 flags.go:64] FLAG: --qos-reserved="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570457 4949 flags.go:64] FLAG: --read-only-port="10255" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570466 4949 flags.go:64] FLAG: --register-node="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570475 4949 flags.go:64] FLAG: --register-schedulable="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570484 4949 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570499 4949 flags.go:64] FLAG: --registry-burst="10" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570508 4949 flags.go:64] FLAG: --registry-qps="5" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570547 
4949 flags.go:64] FLAG: --reserved-cpus="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570557 4949 flags.go:64] FLAG: --reserved-memory="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570569 4949 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570578 4949 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570587 4949 flags.go:64] FLAG: --rotate-certificates="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570596 4949 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570605 4949 flags.go:64] FLAG: --runonce="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570614 4949 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570623 4949 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570632 4949 flags.go:64] FLAG: --seccomp-default="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570640 4949 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570650 4949 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570659 4949 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570668 4949 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570677 4949 flags.go:64] FLAG: --storage-driver-password="root" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570686 4949 flags.go:64] FLAG: --storage-driver-secure="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570695 4949 flags.go:64] FLAG: --storage-driver-table="stats" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570703 4949 flags.go:64] FLAG: --storage-driver-user="root" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570712 4949 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570721 4949 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570730 4949 flags.go:64] FLAG: --system-cgroups="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570739 4949 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570753 4949 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570761 4949 flags.go:64] FLAG: --tls-cert-file="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570770 4949 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570782 4949 flags.go:64] FLAG: --tls-min-version="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570792 4949 flags.go:64] FLAG: --tls-private-key-file="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570801 4949 flags.go:64] FLAG: --topology-manager-policy="none" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570810 4949 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570819 4949 flags.go:64] FLAG: --topology-manager-scope="container" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570828 
4949 flags.go:64] FLAG: --v="2" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570839 4949 flags.go:64] FLAG: --version="false" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570850 4949 flags.go:64] FLAG: --vmodule="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570861 4949 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.570870 4949 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571100 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571112 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571122 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571131 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571140 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571150 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571159 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571167 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571175 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571183 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571192 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571200 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571208 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571215 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571223 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571231 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571239 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571247 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571254 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571262 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571270 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571278 4949 feature_gate.go:330] unrecognized feature gate: 
MixedCPUsAllocation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571286 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571299 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571307 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571318 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571327 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571336 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571344 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571353 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571363 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571373 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571383 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571392 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571404 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571412 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571420 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571429 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571438 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571446 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571454 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571463 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571473 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571483 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571492 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571500 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571509 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571540 4949 feature_gate.go:330] unrecognized feature gate: Example Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571548 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571556 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571566 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571574 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571583 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571591 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571599 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571609 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571617 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571624 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571633 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571641 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571648 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571656 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571663 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571671 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571679 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571686 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571697 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571704 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571712 4949 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571720 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.571728 4949 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.572011 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.583695 4949 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.583741 4949 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583869 4949 feature_gate.go:330] unrecognized feature gate: Example Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583886 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583896 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583908 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583917 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583926 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583934 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583943 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583952 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583959 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583967 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583975 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583983 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.583992 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584000 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584008 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584015 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584023 4949 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584031 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584042 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584055 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584065 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584075 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584085 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584096 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584105 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584113 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584122 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584130 4949 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584139 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584148 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584156 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584166 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
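The feature-gate passes above follow a consistent pattern: unrecognized names (feature_gate.go:330) only warn and are skipped, GA gates (:353) and deprecated gates (:351) are applied but warn about future removal, and each pass ends with the resolved map (:386). A simplified model of that behavior, assuming a tiny illustrative subset of known gates (this is not the real k8s featuregate package):

import logging

logging.basicConfig(format="%(levelname).1s %(message)s", level=logging.INFO)

KNOWN = {  # gate name -> maturity; hypothetical trimmed table
    "CloudDualStackNodeIPs": "GA",
    "ValidatingAdmissionPolicy": "GA",
    "KMSv1": "deprecated",
    "NodeSwap": "Beta",
}

def apply_gates(requested, enabled=None):
    enabled = dict(enabled or {})
    for name, value in requested.items():
        maturity = KNOWN.get(name)
        if maturity is None:
            # analogue of the W...feature_gate.go:330 lines: warn and skip
            logging.warning("unrecognized feature gate: %s", name)
            continue
        if maturity in ("GA", "deprecated"):
            # analogue of feature_gate.go:351/:353
            logging.warning("Setting %s feature gate %s=%s. It will be "
                            "removed in a future release.", maturity, name, value)
        enabled[name] = value
    logging.info("feature gates: %s", enabled)  # analogue of :386
    return enabled

apply_gates({"GatewayAPI": True, "KMSv1": True, "ValidatingAdmissionPolicy": True})

This also explains why the same warnings repeat several times in this log: the gate list is re-applied once per configuration source, and each pass re-logs every unknown name.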
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584177 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584186 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584195 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584203 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584212 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584220 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584228 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584236 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584244 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584252 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584259 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584267 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584275 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584283 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584292 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584300 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584308 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584316 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584323 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584331 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584339 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584347 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584356 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584364 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584372 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584379 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration 
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584387 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584396 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584404 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584412 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584420 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584428 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584436 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584444 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584451 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584459 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584468 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584476 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.584489 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584785 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584801 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584811 4949 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584820 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584828 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584836 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584844 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584852 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584861 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584869 4949 feature_gate.go:330] unrecognized 
feature gate: AWSClusterHostedDNS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584876 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584884 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584893 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584901 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584913 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584922 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584931 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584939 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584947 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584955 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584963 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584971 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584979 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584988 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.584995 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585003 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585011 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585018 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585026 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585034 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585042 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585049 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585057 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585064 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585072 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 
20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585080 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585088 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585095 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585103 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585112 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585120 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585128 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585135 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585143 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585151 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585158 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585167 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585174 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585182 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585190 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585198 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585205 4949 feature_gate.go:330] unrecognized feature gate: Example Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585213 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585221 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585228 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585237 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585246 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585256 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585263 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585271 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585279 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585287 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585295 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585302 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585314 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585324 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585334 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585344 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585352 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585362 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.585370 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.585383 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.585639 4949 server.go:940] "Client rotation is on, will bootstrap in background" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.591481 4949 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.591656 4949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
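With client rotation on, the entries just below log a certificate expiration of 2026-02-24 and a rotation deadline of 2026-01-10, already in the past at this boot, so rotation starts immediately. A sketch of how client-go's certificate manager typically derives such a deadline: a jittered point late in the certificate's validity window (roughly 70-90% of notBefore..notAfter per the upstream comments) so a fleet of nodes does not rotate at once. The notBefore value below is an assumption (a one-year cert implied by the expiration); the exact jitter bounds are an implementation detail:

import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a jittered rotation point late in the cert's lifetime."""
    total = (not_after - not_before).total_seconds()
    fraction = 0.7 + 0.2 * random.random()  # assumption: uniform in [0.7, 0.9)
    return not_before + timedelta(seconds=total * fraction)

not_before = datetime(2025, 2, 24, 5, 52, 8)  # assumed issue time (1y cert)
not_after = datetime(2026, 2, 24, 5, 52, 8)   # expiration from the log
now = datetime(2026, 1, 20, 14, 50, 4)        # this boot

deadline = rotation_deadline(not_before, not_after)
print(f"deadline {deadline}, rotate now: {now >= deadline}")

The logged deadline sits about 88% of the way through that window, consistent with this scheme; because the API server is not yet reachable, the first CSR attempt below fails with connection refused and is retried in the background.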
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.592588 4949 server.go:997] "Starting client certificate rotation"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.592633 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.592901 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-10 07:27:22.631201534 +0000 UTC
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.593044 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.607392 4949 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.609638 4949 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.610381 4949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.622694 4949 log.go:25] "Validated CRI v1 runtime API"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.656664 4949 log.go:25] "Validated CRI v1 image API"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.659720 4949 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.663444 4949 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-20-14-45-46-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.663909 4949 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.692842 4949 manager.go:217] Machine: {Timestamp:2026-01-20 14:50:04.69092279 +0000 UTC m=+0.500753688 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3efd1f11-fa35-4658-a27c-ab73770bda97 BootID:18da5c89-38cf-46f2-855c-9ee31684d8b7 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:72:be:e5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:72:be:e5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f3:4c:a5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ad:0a:0a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8e:ff:ed Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:54:6a:84 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:8d:f8:64 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:3c:cb:a1:94:bc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:23:a1:07:12:b9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.693147 4949 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.693430 4949 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.694778 4949 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695144 4949 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695198 4949 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695567 4949 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695592 4949 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695927 4949 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.695992 4949 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.696290 4949 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.696864 4949 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697881 4949 kubelet.go:418] "Attempting to sync node with API server"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697913 4949 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697940 4949 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697961 4949 kubelet.go:324] "Adding apiserver pod source"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.697978 4949 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.700090 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.700176 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.700217 4949 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.700352 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.700463 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.700778 4949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
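
The nodeConfig dump above encodes the node's hard eviction policy in HardEvictionThresholds: a filesystem signal such as nodefs.available trips eviction when it drops below a percentage of capacity (10% here), while memory.available uses an absolute quantity (100Mi). A small Go sketch of that comparison, reusing the /var capacity from the filesystem table above (the struct and sample values are illustrative, not the kubelet's eviction manager):

package main

import "fmt"

// threshold mirrors one HardEvictionThresholds entry: either an
// absolute quantity in bytes or a fraction of total capacity.
type threshold struct {
	signal     string
	percentage float64 // fraction of capacity; 0 if quantity-based
	quantity   int64   // bytes; 0 if percentage-based
}

// crossed reports whether available has dropped below the threshold.
func crossed(t threshold, available, capacity int64) bool {
	if t.percentage > 0 {
		return float64(available) < t.percentage*float64(capacity)
	}
	return available < t.quantity
}

func main() {
	nodefs := threshold{signal: "nodefs.available", percentage: 0.1}
	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi

	// /var backs nodefs; capacity 85292941312 bytes per the fs table above.
	fmt.Println(crossed(nodefs, 7<<30, 85292941312)) // true: 7 GiB < 10% of ~85 GB
	fmt.Println(crossed(mem, 200<<20, 33654128640))  // false: 200Mi >= 100Mi
}
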
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.705875 4949 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706857 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706905 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706925 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.706943 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707019 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707036 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707061 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707084 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707101 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707114 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707152 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.707165 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.709403 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.710164 4949 server.go:1280] "Started kubelet"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.711265 4949 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.711353 4949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712131 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:04 crc systemd[1]: Started Kubernetes Kubelet.
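
The podresources endpoint above is configured with qps=100 and burstTokens=10, i.e. a classic token bucket: up to 10 requests are served instantly, after which capacity refills at 100 tokens per second. A self-contained Go sketch of that shape (a sketch of the rate-limiting idea only, not the kubelet's ratelimit implementation):

package main

import (
	"fmt"
	"time"
)

// bucket is a minimal token bucket matching the logged limit:
// burst of 10 immediate requests, refilled at qps tokens/second.
type bucket struct {
	tokens, burst, qps float64
	last               time.Time
}

// allow refills by elapsed time, then spends one token if available.
func (b *bucket) allow(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &bucket{tokens: 10, burst: 10, qps: 100, last: time.Now()}
	granted := 0
	for i := 0; i < 20; i++ { // 20 back-to-back requests: roughly 10 pass
		if b.allow(time.Now()) {
			granted++
		}
	}
	fmt.Println("granted:", granted)
}
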
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712749 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712780 4949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713012 4949 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713028 4949 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713043 4949 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.713131 4949 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.712974 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:28:53.501819622 +0000 UTC
Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.714742 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms"
Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.714790 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.715189 4949 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.715761 4949 factory.go:55] Registering systemd factory
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.715797 4949 factory.go:221] Registration of the systemd container factory successfully
Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.715153 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c77eaf79d97de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,LastTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.716015 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716255 4949 factory.go:153] Registering CRI-O factory
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716289 4949 factory.go:221] Registration of the crio container factory successfully
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716406 4949 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716447 4949 factory.go:103] Registering Raw factory
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.716478 4949 manager.go:1196] Started watching for new ooms in manager
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.724028 4949 manager.go:319] Starting recovery of all containers
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.724177 4949 server.go:460] "Adding debug handlers to kubelet server"
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.731952 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732009 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732030 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732044 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732060 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732076 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732090 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732105 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732120 4949
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732135 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732150 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732163 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732176 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732192 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732213 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732228 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732245 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732260 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732273 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732287 
4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732299 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732314 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732328 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732341 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732354 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732368 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732384 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732400 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732413 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732425 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732438 4949 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732452 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732466 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732509 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732546 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732560 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732573 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732586 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732598 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732611 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732623 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732636 4949 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732649 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732660 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732673 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732685 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732698 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732711 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732724 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732739 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732753 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732767 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732784 4949 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732798 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732814 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732832 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732845 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732859 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732871 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732883 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732896 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732909 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732921 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732935 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732949 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732964 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732977 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.732989 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733002 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733015 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733027 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733040 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733053 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733066 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733080 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733093 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733108 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733122 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733136 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733150 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733164 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733177 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733190 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733204 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733218 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733231 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733243 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733259 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733273 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733289 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733301 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733322 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733336 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733349 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733363 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733379 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733393 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733405 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733417 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733429 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733441 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733454 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733466 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733479 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733497 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733512 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733554 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733569 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733584 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733599 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733613 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733628 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733644 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733658 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733673 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733687 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733700 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733714 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733727 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733743 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733755 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733769 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733782 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733794 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733807 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733822 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733836 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733848 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733872 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733886 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733898 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733912 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733924 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733938 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733952 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733966 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733980 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.733996 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734012 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734025 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734038 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734052 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734301 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734316 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734329 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734341 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734353 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734365 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734378 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734389 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734405 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734418 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734432 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734445 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734456 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734467 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734480 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734490 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734503 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734559 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734577 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734588 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734599 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734609 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734623 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734636 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734648 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734659 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734671 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734683 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734696 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734707 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734719 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734733 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734746 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734758 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734772 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734784 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734796 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734809 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734823 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734837 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734849 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734868 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734880 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734893 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734906 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734918 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734929 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734976 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.734993 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735008 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735025 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735039 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735052 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735065 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735260 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735273 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735286 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735303 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735315 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735328 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735394 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.735410 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736315 4949 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736345 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736360 4949 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736373 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736388 4949 reconstruct.go:97] "Volume reconstruction finished" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.736398 4949 reconciler.go:26] "Reconciler: start to sync state" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.754058 4949 manager.go:324] Recovery completed Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.770713 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.777788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.777839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.777851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.780671 4949 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.780712 4949 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.780753 4949 state_mem.go:36] "Initialized new in-memory state store" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.785051 4949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.787629 4949 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.787690 4949 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.787716 4949 kubelet.go:2335] "Starting kubelet main sync loop" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.787780 4949 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 14:50:04 crc kubenswrapper[4949]: W0120 14:50:04.790308 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.790782 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.793944 4949 policy_none.go:49] "None policy: Start" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.794799 4949 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.794832 4949 state_mem.go:35] "Initializing new in-memory state store" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.815864 4949 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.850621 4949 manager.go:334] "Starting Device Plugin manager" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.850689 4949 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.850705 4949 server.go:79] "Starting device plugin registration server" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851265 4949 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851288 4949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851542 4949 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851638 4949 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.851648 4949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.862355 4949 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.888600 4949 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 14:50:04 crc kubenswrapper[4949]: 
I0120 14:50:04.888747 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890233 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890273 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890285 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890419 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890764 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.890825 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891373 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891632 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.891680 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895331 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895355 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.895363 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896433 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896546 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896586 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.896639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.897999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.898014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.898182 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.898211 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.899423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.899454 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.899466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.915860 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.937919 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.937964 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.937991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938014 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938064 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938128 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938150 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938170 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938191 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938251 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938414 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938471 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.938562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.951706 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953111 
4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:04 crc kubenswrapper[4949]: I0120 14:50:04.953211 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:04 crc kubenswrapper[4949]: E0120 14:50:04.953828 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039687 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039772 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039803 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039834 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039912 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039943 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039977 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.039982 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040050 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040086 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040127 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040168 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040282 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040330 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040372 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040433 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040501 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040601 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040668 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040801 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040868 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.040170 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.041083 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.154832 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156897 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.156935 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.157544 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.226960 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.237021 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.260242 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.284286 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c WatchSource:0}: Error finding container e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c: Status 404 returned error can't find the container with id e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c
Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.285430 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d WatchSource:0}: Error finding container c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d: Status 404 returned error can't find the container with id c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.287762 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.291149 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256 WatchSource:0}: Error finding container 34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256: Status 404 returned error can't find the container with id 34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.294334 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.310206 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d WatchSource:0}: Error finding container d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d: Status 404 returned error can't find the container with id d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d
Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.316787 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms"
Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.316949 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590 WatchSource:0}: Error finding container f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590: Status 404 returned error can't find the container with id f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590
Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.471234 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c77eaf79d97de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,LastTimestamp:2026-01-20 14:50:04.710115294 +0000 UTC m=+0.519946212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.558386 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560818 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.560855 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.561352 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.712950 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.714095 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:21:00.186034454 +0000 UTC
Jan 20 14:50:05 crc kubenswrapper[4949]: W0120 14:50:05.741790 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:05 crc kubenswrapper[4949]: E0120 14:50:05.742025 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.796564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.796750 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d410656dfbcb994567dce6b90781c57b52b0030f91fbcf1167a7585e2bb5f00d"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.799310 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.799372 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"34fae220f67971a06228b59a6b1a4fe1c17e08023b0caccdebcf68e7be94a256"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.799548 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.802969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.803033 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c84d0b8844203d4b70ef500b9b75accc4de9dd99b14b49b84030e1cc0ebdb62d"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.803166 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.804128 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.804160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.804172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.805370 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.805416 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e25e844a086d0ce9db9928e222b8dcbb1a1937a11b8beaceb519ee154535ad8c"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.805587 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.806449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.806483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.806495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.807215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.807266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7525c4e1c25356c83de59ad475a1e37782a795454afdff4f381280d6ac4f590"}
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.807359 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.808138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.808183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:05 crc kubenswrapper[4949]: I0120 14:50:05.808199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: W0120 14:50:06.104738 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.104843 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.118139 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s"
Jan 20 14:50:06 crc kubenswrapper[4949]: W0120 14:50:06.118279 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.118451 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:06 crc kubenswrapper[4949]: W0120 14:50:06.244419 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.244503 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.362217 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364217 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.364270 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.365042 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.41:6443: connect: connection refused" node="crc"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.713436 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.714318 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:47:47.771411663 +0000 UTC
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.726610 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 14:50:06 crc kubenswrapper[4949]: E0120 14:50:06.728140 4949 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.811057 4949 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.811141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.811278 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.812300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.812346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.812359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.814538 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.814626 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.814795 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.815771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.815832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.815850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821350 4949 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821417 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821502 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821554 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821569 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.821723 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.822988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.823100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.823121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825270 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825342 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.825373 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.826872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.826908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.826923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.827897 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021" exitCode=0
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.827965 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021"}
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.828109 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.828986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.829036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.829055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.833843 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.835183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.835210 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:06 crc kubenswrapper[4949]: I0120 14:50:06.835222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.713066 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.715200 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:12:29.683369653 +0000 UTC
Jan 20 14:50:07 crc kubenswrapper[4949]: E0120 14:50:07.718741 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="3.2s"
Jan 20 14:50:07 crc kubenswrapper[4949]: W0120 14:50:07.808437 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.41:6443: connect: connection refused
Jan 20 14:50:07 crc kubenswrapper[4949]: E0120 14:50:07.808550 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.41:6443: connect: connection refused" logger="UnhandledError"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835072 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835119 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835130 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.835141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.836711 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.836799 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.837529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.837556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.837566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838491 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530" exitCode=0
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838546 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530"}
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838572 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838593 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838702 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.838930 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.841009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.841107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.841125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.844248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.844289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.844304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.845083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.845112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.845125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.965732 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:07 crc kubenswrapper[4949]: I0120 14:50:07.967250 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.368449 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.716205 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:01:39.28803442 +0000 UTC
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.797088 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.846278 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f" exitCode=0
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.846593 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f"}
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.846835 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.848469 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.848564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.848590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.854235 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475"}
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.854274 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.854454 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.855368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.855436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.855461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.856641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.856704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:08 crc kubenswrapper[4949]: I0120 14:50:08.856729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.438453 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.717265 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:02:09.111589546 +0000 UTC
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863037 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e"}
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863101 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5"}
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863123 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7"}
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863221 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863298 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.863728 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865632 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:09 crc kubenswrapper[4949]: I0120 14:50:09.865659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.717806 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:08:09.802952828 +0000 UTC
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.873936 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b"}
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874000 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e"}
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874060 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.874098 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876187 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.876206 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:10 crc kubenswrapper[4949]: I0120 14:50:10.980779 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.091670 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.297747 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.298021 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.299691 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.299732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.299741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.718569 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:26:10.009860568 +0000 UTC
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.797115 4949 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.797214 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.877777 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.877927 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879820 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.879986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:11 crc kubenswrapper[4949]: I0120 14:50:11.880005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.498814 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.499094 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.501290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.501361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.501383 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:12 crc kubenswrapper[4949]: I0120 14:50:12.719224 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:05:24.977805728 +0000 UTC
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.274911 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.275192 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.276819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.276875 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.276901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.288221 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.288452 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.289803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.289840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.289851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:13 crc kubenswrapper[4949]: I0120 14:50:13.720344 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:38:50.432247762 +0000 UTC
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.721445 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:43:56.15909642 +0000 UTC
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.728660 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.728975 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.730464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.730497 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:14 crc kubenswrapper[4949]: I0120 14:50:14.730510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:14 crc kubenswrapper[4949]: E0120 14:50:14.862540 4949 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.721767 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:02:10.61842003 +0000 UTC
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.755285 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.755505 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.757303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.757360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.757377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.762807 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.893727 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.895223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.895284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.895305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:15 crc kubenswrapper[4949]: I0120 14:50:15.900458 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.722874 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:49:51.28621141 +0000 UTC
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.895854 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.896775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.896828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:50:16 crc kubenswrapper[4949]: I0120 14:50:16.896850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:50:17 crc kubenswrapper[4949]: I0120 14:50:17.723604 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:28:37.916380004 +0000 UTC
Jan 20 14:50:17 crc kubenswrapper[4949]: E0120 14:50:17.968504 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 20 14:50:18 crc kubenswrapper[4949]: W0120 14:50:18.054334 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.054465 4949 trace.go:236] Trace[620736880]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:08.052) (total time: 10001ms):
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[620736880]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:50:18.054)
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[620736880]: [10.001882519s] [10.001882519s] END
Jan 20 14:50:18 crc kubenswrapper[4949]: E0120 14:50:18.054496 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 20 14:50:18 crc kubenswrapper[4949]: W0120 14:50:18.151134 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.151260 4949 trace.go:236] Trace[1023481260]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:08.149) (total time: 10001ms):
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[1023481260]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:50:18.151)
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[1023481260]: [10.001819364s] [10.001819364s] END
Jan 20 14:50:18 crc kubenswrapper[4949]: E0120 14:50:18.151292 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.714500 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.724723 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:56:30.053882298 +0000 UTC
Jan 20 14:50:18 crc kubenswrapper[4949]: W0120 14:50:18.936108 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 20 14:50:18 crc kubenswrapper[4949]: I0120 14:50:18.936211 4949 trace.go:236] Trace[952145599]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:08.933) (total time: 10002ms):
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[952145599]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:50:18.936)
Jan 20 14:50:18 crc kubenswrapper[4949]: Trace[952145599]: [10.002375283s] [10.002375283s] END
Jan 20 14:50:18 crc kubenswrapper[4949]: E0120 14:50:18.936234 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.466402 4949 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.466488 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 20 14:50:19 crc
kubenswrapper[4949]: I0120 14:50:19.473321 4949 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.473384 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 14:50:19 crc kubenswrapper[4949]: I0120 14:50:19.725062 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:54:28.744797727 +0000 UTC Jan 20 14:50:20 crc kubenswrapper[4949]: I0120 14:50:20.725307 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:29:48.549594684 +0000 UTC Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.169614 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171207 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.171278 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:21 crc kubenswrapper[4949]: E0120 14:50:21.177126 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.671665 4949 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.726394 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:59:12.740320599 +0000 UTC Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.797899 4949 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 14:50:21 crc kubenswrapper[4949]: I0120 14:50:21.798295 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 20 14:50:22 crc kubenswrapper[4949]: I0120 14:50:22.728143 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:17:12.315102702 +0000 UTC Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.112058 4949 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.296485 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.303619 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.548233 4949 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.712210 4949 apiserver.go:52] "Watching apiserver" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.716179 4949 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.716657 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717236 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.717374 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717452 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717467 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.717553 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.717890 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.718001 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.718115 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.718216 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.719658 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720625 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720787 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720816 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.720803 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.721259 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.721339 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.721856 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.723121 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.728361 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:00:35.616469918 +0000 UTC Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.755931 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.772260 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.791575 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.810570 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.814236 4949 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.825571 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.842490 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.859621 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.876607 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:23 crc kubenswrapper[4949]: I0120 14:50:23.914233 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 14:50:23 crc kubenswrapper[4949]: E0120 14:50:23.923399 4949 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.453471 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.456951 4949 trace.go:236] Trace[863191421]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 14:50:13.779) (total time: 10677ms): Jan 20 14:50:24 crc kubenswrapper[4949]: Trace[863191421]: ---"Objects listed" error: 10677ms (14:50:24.456) Jan 20 14:50:24 crc kubenswrapper[4949]: Trace[863191421]: [10.677092457s] [10.677092457s] END Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.457016 4949 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.459354 4949 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.476403 4949 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.492074 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.509128 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.526369 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.540954 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.557712 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560118 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560222 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560260 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560289 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560313 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560338 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560366 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560397 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560445 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560487 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560540 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560566 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560587 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560613 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560645 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560635 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560670 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560784 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560830 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560868 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560906 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560940 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.560981 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561015 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 
20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561049 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561083 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561088 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561117 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561154 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561188 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561230 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561302 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561368 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561406 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561442 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561475 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561511 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561581 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561615 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561652 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561684 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561697 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561756 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561790 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561887 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561923 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561928 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561939 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.561959 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562063 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562125 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562128 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562180 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562218 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562303 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562341 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562432 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562471 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562506 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562568 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562607 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562643 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562678 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562754 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562794 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.562829 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562897 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562936 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562974 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563013 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563058 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563108 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563150 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563286 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563325 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563362 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563402 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563443 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563481 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563547 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563584 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563622 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563660 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563739 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563779 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563819 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563895 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563933 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563976 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564014 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564094 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564133 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564200 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564245 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564281 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564320 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564361 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564396 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564461 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564499 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564730 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564770 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564913 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564965 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565512 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565620 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565681 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.565733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565773 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565813 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565885 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565937 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565996 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566044 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566100 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566146 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.566198 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566314 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566371 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566426 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566580 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566645 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566698 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566751 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566807 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566854 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566891 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566931 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566970 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567008 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567059 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567098 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567133 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567172 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567254 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567293 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567331 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567369 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567448 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567559 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567601 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567638 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567676 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567713 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567754 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567792 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567831 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567869 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567908 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.567947 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568000 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568055 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568095 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568135 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568174 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568213 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568251 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568329 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568370 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568409 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568448 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568487 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568555 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568595 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568633 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568671 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568711 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568748 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568786 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568827 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568867 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568905 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568946 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568982 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569022 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569099 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569139 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569177 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569338 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569426 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569478 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569549 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569600 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569645 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569687 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" 
(UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569727 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569816 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569854 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569895 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569997 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570026 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570053 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570081 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570104 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 
crc kubenswrapper[4949]: I0120 14:50:24.570126 4949 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580120 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562321 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562408 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582494 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562590 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562690 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562934 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.562958 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563107 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563387 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563504 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.563763 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564264 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564337 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564647 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564619 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564714 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564890 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.564976 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565085 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565128 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565568 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565765 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.565911 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566029 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566067 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566188 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566205 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566231 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566251 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566341 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566437 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566715 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566729 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566818 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566873 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.566954 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568058 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568567 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568581 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568640 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.568997 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569335 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569362 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.569745 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570056 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570111 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570169 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570198 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570731 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570867 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.570981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571039 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571367 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571864 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571912 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.571976 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572042 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572476 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572455 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.572856 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.072831568 +0000 UTC m=+20.882662426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.583652 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.583680 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.583783 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584082 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572887 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.572884 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573128 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573145 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573302 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573671 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573842 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573879 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.573960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574172 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574457 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.574798 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.575060 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.575677 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.575971 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576149 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576205 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576497 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576914 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.576937 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577060 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577088 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577105 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577784 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.577865 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578084 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578485 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.578807 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.579234 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.579315 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.579627 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580759 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580780 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.580841 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581144 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581223 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581670 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.581575 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582181 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582240 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.582374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.584274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585004 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585553 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585782 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585867 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.585969 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586046 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586253 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586310 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.586430 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587063 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587371 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587888 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.587891 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.588777 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.591482 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.591582 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.091562082 +0000 UTC m=+20.901392950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.591673 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592037 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592079 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592375 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592410 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592773 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.592819 4949 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.592920 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.593025 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.093004739 +0000 UTC m=+20.902835607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593056 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593142 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593243 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593640 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.593923 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.594044 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.598674 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.600833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.601483 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.605446 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.605623 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.605657 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.605735 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.605817 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.606267 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.606756 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.607885 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.612035 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.612769 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.613080 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.613893 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.613989 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.616807 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.620591 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.620694 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.120673951 +0000 UTC m=+20.930504809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.623713 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.624409 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.637959 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.647960 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.647993 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.648018 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: E0120 14:50:24.648065 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:25.148050083 +0000 UTC m=+20.957880941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.648634 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.650191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.650656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.638308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee981
53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.659065 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.659878 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.659940 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660219 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660280 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660657 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.660971 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.663001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.665835 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.665854 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666108 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666327 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666410 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666751 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.666843 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667270 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667341 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667890 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.667953 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670682 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670892 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.670946 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671071 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671171 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671224 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671242 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671256 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671265 4949 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671274 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671284 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671297 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671308 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671317 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671327 4949 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671336 4949 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671345 4949 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671355 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671364 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671374 4949 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671384 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671394 4949 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671428 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671440 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671448 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671458 4949 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671486 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671499 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671554 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671565 4949 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671576 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671618 4949 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671628 4949 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671637 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671646 4949 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671655 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671664 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671672 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671682 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671690 4949 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671700 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671756 4949 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671768 4949 reconciler_common.go:293] "Volume detached for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671776 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671786 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671796 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671808 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671818 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671828 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671839 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671849 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671879 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671890 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671901 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671722 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671910 4949 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671938 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671952 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671960 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671969 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671977 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671987 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.671997 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672006 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672016 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672027 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 
14:50:24.672035 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672044 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672052 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672061 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672070 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672079 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672089 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.672098 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673251 4949 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673315 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673332 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673344 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673363 4949 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673374 4949 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673384 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673395 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673412 4949 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673423 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673433 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673442 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673456 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673465 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673475 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673486 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673501 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673510 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 
14:50:24.673550 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673566 4949 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673578 4949 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673587 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673596 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673610 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673620 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673630 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673639 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673652 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673662 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673671 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673684 4949 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673693 4949 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673701 4949 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673710 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673722 4949 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673740 4949 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673750 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673760 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673772 4949 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673782 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673791 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673800 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673820 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673832 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673841 4949 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node 
\"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673855 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673865 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673876 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673887 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673902 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673915 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673927 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673938 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673953 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673963 4949 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.673973 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674041 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674056 4949 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674070 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674080 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674094 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674103 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674113 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674123 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674135 4949 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674145 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674156 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674165 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674180 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674194 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674206 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674224 4949 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674237 4949 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674246 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674256 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674270 4949 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674280 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674289 4949 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674298 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674313 4949 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674323 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674334 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674347 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674362 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674373 4949 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674383 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674417 4949 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674428 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674438 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674449 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674463 4949 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674472 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674482 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674492 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674528 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674538 4949 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674549 4949 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674564 4949 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674574 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674584 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674593 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674605 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674614 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674625 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674634 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674648 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674658 4949 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674669 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674678 4949 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674693 4949 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.674703 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.675933 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.675995 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.676061 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.692910 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.697088 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.729926 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:56:11.822979352 +0000 UTC Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.753735 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.770062 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.770041 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775161 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775200 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775211 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775224 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775236 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775247 4949 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775257 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775266 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.775276 4949 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.776186 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.786244 4949 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.791820 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.792392 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.793829 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.794452 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.795463 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" 
path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.795973 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.796571 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.797531 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.798138 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.798801 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.799178 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.799675 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.800952 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.801544 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.803712 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.804658 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.805356 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.806456 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.807045 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.807871 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.808636 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.809265 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.810085 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.811674 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.812791 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.813385 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.814208 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.814941 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.815323 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.815987 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.816956 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.817698 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.818336 4949 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.818479 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.821642 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.822707 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.824098 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.825954 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 20 14:50:24 crc 
kubenswrapper[4949]: I0120 14:50:24.826619 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.827597 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.828214 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.829299 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.829835 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.830652 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\
\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.830893 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.832145 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.833496 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.834294 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.835492 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.836321 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.838336 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.839066 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.840718 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.841594 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.842494 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.844166 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.844995 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.845967 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.858046 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.874699 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.892422 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.909479 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.918508 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76"} Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.918600 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b1ab9b79eb264bf0014fb2391f6f422e162c002c2543736ea4894a4d1c67500a"} Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.929470 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.940265 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.947554 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: W0120 14:50:24.955300 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1 WatchSource:0}: Error finding container ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1: Status 404 returned error can't find the container with id ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1 Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.958193 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.967604 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:24 crc kubenswrapper[4949]: I0120 14:50:24.981774 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.002284 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b
094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.023305 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.035399 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.051998 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.061427 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.075602 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.078646 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.078825 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.078799003 +0000 UTC m=+21.888629861 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.086114 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.100868 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.113115 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179609 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179662 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179686 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.179710 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179709 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179777 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.179764228 +0000 UTC m=+21.989595086 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179823 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179841 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179853 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179887 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.179876182 +0000 UTC m=+21.989707050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179943 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179954 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179963 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.179990 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.179981835 +0000 UTC m=+21.989812703 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.180039 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.180065 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:26.180057258 +0000 UTC m=+21.989888136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.730860 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:29:15.455770432 +0000 UTC Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.788394 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.788394 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.788544 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.788592 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.788495 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:25 crc kubenswrapper[4949]: E0120 14:50:25.788684 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.805862 4949 csr.go:261] certificate signing request csr-2kwn5 is approved, waiting to be issued Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.843869 4949 csr.go:257] certificate signing request csr-2kwn5 is issued Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.922131 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.923533 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b2375cb4dbc005ebf9f503b624410324d7ddca522dfaeee89b2862d39aa1ac60"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.925084 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.925112 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ab84e4f6e656b7d163d0a6434a4dbd0318ff2f64ea936ddeb2eb251e5d60d1b1"} Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.955574 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:25 crc kubenswrapper[4949]: I0120 14:50:25.983848 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.003659 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:2
4Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.026222 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\
\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.047483 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.063028 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.077535 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.087039 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.087198 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.087175297 +0000 UTC m=+23.897006155 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.095625 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.116814 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.132218 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.145271 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.156202 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.176530 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.188331 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.188387 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.188412 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.188446 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188559 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188593 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188592 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not 
registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188627 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188641 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188651 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188678 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.18865879 +0000 UTC m=+23.998489648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188604 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188696 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.188688061 +0000 UTC m=+23.998518919 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188573 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188713 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.188705981 +0000 UTC m=+23.998536839 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.188733 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.188718862 +0000 UTC m=+23.998549720 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.192070 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.272291 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.286934 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.706720 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-gnfmv"] Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.707027 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.707226 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-sqr5x"] Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.708272 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709193 4949 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709214 4949 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709231 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709283 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709537 4949 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709602 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.709805 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kgqjd"] Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709812 4949 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709969 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: W0120 14:50:26.709833 4949 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 20 14:50:26 crc kubenswrapper[4949]: E0120 14:50:26.709992 4949 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.710023 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.710164 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.710760 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.711438 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.712851 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.713372 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.713470 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.713641 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.718137 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.725710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.731222 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:03:12.755829601 +0000 UTC Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.736590 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.754371 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.772359 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.785643 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792439 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxdwg\" (UniqueName: \"kubernetes.io/projected/c0e8f07d-a71c-4c64-96f3-eecb529c1674-kube-api-access-hxdwg\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792491 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-cnibin\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " 
pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792535 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-rootfs\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792627 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-proxy-tls\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792713 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-binary-copy\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792796 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc62j\" (UniqueName: \"kubernetes.io/projected/da08b8e6-19e1-41fa-8e71-2988f3effb27-kube-api-access-xc62j\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792831 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7xk\" (UniqueName: \"kubernetes.io/projected/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-kube-api-access-5b7xk\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c0e8f07d-a71c-4c64-96f3-eecb529c1674-hosts-file\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792886 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.792975 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-os-release\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.793004 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-mcd-auth-proxy-config\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.793035 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-system-cni-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.800989 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.819996 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.833360 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.844703 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-20 14:45:25 +0000 UTC, rotation deadline is 2026-11-20 23:23:23.998892803 +0000 UTC Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.844760 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7304h32m57.154139948s for next certificate rotation Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.844755 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.857565 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.871643 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.883680 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regenera
tion-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894196 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-binary-copy\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894242 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894268 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc62j\" (UniqueName: \"kubernetes.io/projected/da08b8e6-19e1-41fa-8e71-2988f3effb27-kube-api-access-xc62j\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894292 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7xk\" (UniqueName: \"kubernetes.io/projected/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-kube-api-access-5b7xk\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c0e8f07d-a71c-4c64-96f3-eecb529c1674-hosts-file\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894352 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894375 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-mcd-auth-proxy-config\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 
14:50:26.894399 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-os-release\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-system-cni-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894501 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxdwg\" (UniqueName: \"kubernetes.io/projected/c0e8f07d-a71c-4c64-96f3-eecb529c1674-kube-api-access-hxdwg\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894985 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-cnibin\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895040 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-rootfs\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895155 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-cnibin\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894421 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c0e8f07d-a71c-4c64-96f3-eecb529c1674-hosts-file\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.894912 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-os-release\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895279 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-rootfs\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895328 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/da08b8e6-19e1-41fa-8e71-2988f3effb27-system-cni-dir\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895093 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-proxy-tls\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.895472 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-mcd-auth-proxy-config\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.904453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-proxy-tls\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.908445 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.912622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7xk\" (UniqueName: \"kubernetes.io/projected/2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e-kube-api-access-5b7xk\") pod \"machine-config-daemon-kgqjd\" (UID: \"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\") " pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.921609 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.928950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc62j\" (UniqueName: \"kubernetes.io/projected/da08b8e6-19e1-41fa-8e71-2988f3effb27-kube-api-access-xc62j\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.932659 
4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.954171 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.965794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:26 crc kubenswrapper[4949]: I0120 14:50:26.979393 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.000862 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aa
b85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6
a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:26Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.014259 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.032235 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:50:27 crc kubenswrapper[4949]: W0120 14:50:27.043835 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c9c7916_1f51_47f7_abe3_2ec9cd2a1f5e.slice/crio-28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1 WatchSource:0}: Error finding container 28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1: Status 404 returned error can't find the container with id 28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1 Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.074003 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2szcd"] Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.074424 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.074659 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.075495 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.076159 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.076394 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077199 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077324 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077450 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077535 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.077673 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.078899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.079206 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.090883 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.104466 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.117419 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.132133 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.144944 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.157762 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.172169 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.182319 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.195681 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198091 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198140 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-k8s-cni-cncf-io\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198165 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: 
\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198191 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198219 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198290 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198332 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198354 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198384 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-system-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198404 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-etc-kubernetes\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198424 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9cmb\" (UniqueName: 
\"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198454 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-hostroot\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198563 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-cnibin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198602 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-cni-binary-copy\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198623 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-kubelet\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198644 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9h4l\" (UniqueName: \"kubernetes.io/projected/3ac16078-f295-4f4b-875c-a8505e87b9da-kube-api-access-b9h4l\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-conf-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198741 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-multus-certs\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198765 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198785 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198807 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-netns\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198829 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-bin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198882 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198916 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-multus\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198935 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198949 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198963 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.198980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199000 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-daemon-config\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-os-release\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199040 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199062 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199136 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-socket-dir-parent\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199166 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.199194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.219533 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.235099 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.249791 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.261400 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.274059 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-os-release\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300193 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300222 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300260 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-socket-dir-parent\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300284 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300307 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300350 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-k8s-cni-cncf-io\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300375 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-os-release\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300445 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-socket-dir-parent\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300459 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300385 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300411 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300485 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300462 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-k8s-cni-cncf-io\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: 
\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300543 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300574 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300654 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300677 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300678 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300704 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300749 
4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-system-cni-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-etc-kubernetes\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300794 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300820 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-etc-kubernetes\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-hostroot\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300853 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-cnibin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300873 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-cni-binary-copy\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300895 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-kubelet\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9h4l\" (UniqueName: \"kubernetes.io/projected/3ac16078-f295-4f4b-875c-a8505e87b9da-kube-api-access-b9h4l\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300928 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-system-cni-dir\") pod \"multus-2szcd\" (UID: 
\"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300940 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300960 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-hostroot\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300976 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-conf-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300997 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-multus-certs\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301049 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-netns\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-bin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301121 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-multus\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301144 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301165 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301189 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301239 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-daemon-config\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301261 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.300797 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301327 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301333 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") 
pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-netns\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301371 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301391 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-cnibin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301398 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-bin\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301434 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301473 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-cni-multus\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301476 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-var-lib-kubelet\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301545 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301857 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-host-run-multus-certs\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301863 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-conf-dir\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.301894 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.302293 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.302574 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-multus-daemon-config\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.302977 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.305444 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.329577 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.339274 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"ovnkube-node-z6zd5\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.367073 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9h4l\" (UniqueName: \"kubernetes.io/projected/3ac16078-f295-4f4b-875c-a8505e87b9da-kube-api-access-b9h4l\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.370810 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.413481 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.416727 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:27 crc kubenswrapper[4949]: W0120 14:50:27.427262 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod775d7cfb_d5e3_457d_a7fa_4f0bdb752d04.slice/crio-5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7 WatchSource:0}: Error finding container 5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7: Status 404 returned error can't find the container with id 5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7 Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.430342 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.469345 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.483916 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.497972 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.529361 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.545638 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
26-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.559838 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.571509 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.577947 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.579569 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.586283 4949 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.586498 4949 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.587481 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.601674 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 
2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.605166 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.616124 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 
2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619126 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619141 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.619152 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.630761 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 
2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.634136 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.650825 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 
2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654932 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.654944 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.666977 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 
2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.667161 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.668993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.669110 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.731652 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:38:20.682302982 +0000 UTC Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.760466 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.771934 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.788939 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.789114 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.789601 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.789695 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.789764 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.789834 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.815258 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.853240 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.855598 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-binary-copy\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.862666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3ac16078-f295-4f4b-875c-a8505e87b9da-cni-binary-copy\") pod \"multus-2szcd\" (UID: \"3ac16078-f295-4f4b-875c-a8505e87b9da\") " pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874669 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.874741 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.895420 4949 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Jan 20 14:50:27 crc kubenswrapper[4949]: E0120 14:50:27.895565 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist podName:da08b8e6-19e1-41fa-8e71-2988f3effb27 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:28.395538767 +0000 UTC m=+24.205369625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-sqr5x" (UID: "da08b8e6-19e1-41fa-8e71-2988f3effb27") : failed to sync configmap cache: timed out waiting for the condition Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.930396 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.931742 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" exitCode=0 Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.931814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.931859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.933499 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.933550 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.933562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"28b1ee61635c47fb4b9b6e1d7eb7d86e8087e55f558d57041130383574f7aea1"} Jan 
20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.965203 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\
"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.977164 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:27Z","lastTransitionTime":"2026-01-20T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.978133 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.993282 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2szcd" Jan 20 14:50:27 crc kubenswrapper[4949]: I0120 14:50:27.994077 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:27Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.013425 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.017262 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.027882 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.038826 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.050126 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.062680 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.071387 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079505 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.079552 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.081010 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 
2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.092921 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.105835 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.109296 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.109489 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-20 14:50:32.109473516 +0000 UTC m=+27.919304374 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.129491 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.138824 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.145414 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.150643 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxdwg\" (UniqueName: \"kubernetes.io/projected/c0e8f07d-a71c-4c64-96f3-eecb529c1674-kube-api-access-hxdwg\") pod \"node-resolver-gnfmv\" (UID: \"c0e8f07d-a71c-4c64-96f3-eecb529c1674\") " pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.157753 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.167389 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.181215 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.182825 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.193277 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.206082 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210603 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210659 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210721 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.210745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210752 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210830 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.210813533 +0000 UTC m=+28.020644391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210875 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.210929 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.210913306 +0000 UTC m=+28.020744254 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211002 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211018 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211029 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211054 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.21104616 +0000 UTC m=+28.020877128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211123 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211136 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211146 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: E0120 14:50:28.211177 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:32.211170254 +0000 UTC m=+28.021001112 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.217304 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-gnfmv" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.218758 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.229741 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: W0120 14:50:28.233168 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0e8f07d_a71c_4c64_96f3_eecb529c1674.slice/crio-9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592 WatchSource:0}: Error finding container 9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592: Status 404 returned error can't find the container with id 
9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592 Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.244776 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.277899 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.285687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.286069 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.289228 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.307705 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.347850 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.389993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390049 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.390080 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.412688 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.413379 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/da08b8e6-19e1-41fa-8e71-2988f3effb27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sqr5x\" (UID: \"da08b8e6-19e1-41fa-8e71-2988f3effb27\") " pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494375 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.494404 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.526871 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" Jan 20 14:50:28 crc kubenswrapper[4949]: W0120 14:50:28.548423 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda08b8e6_19e1_41fa_8e71_2988f3effb27.slice/crio-23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b WatchSource:0}: Error finding container 23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b: Status 404 returned error can't find the container with id 23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597759 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.597845 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.701931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702337 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.702350 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.732015 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:21:23.05294025 +0000 UTC Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.803091 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806820 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.806952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.807023 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.807678 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.810989 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.816778 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.835133 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.855938 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.875558 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z 
is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.898460 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917951 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917960 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.917991 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:28Z","lastTransitionTime":"2026-01-20T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.922352 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.941618 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947464 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" 
event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947508 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947582 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947592 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.947603 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.950010 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gnfmv" event={"ID":"c0e8f07d-a71c-4c64-96f3-eecb529c1674","Type":"ContainerStarted","Data":"cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.950038 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-gnfmv" event={"ID":"c0e8f07d-a71c-4c64-96f3-eecb529c1674","Type":"ContainerStarted","Data":"9313f1bb6eb6445d8d48b38b1a8724d2e02bdbbb172e060dcd1480060353a592"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.951902 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.951966 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"d4da37859dee95109f10ecb8a58f89743652a53cf9c32e2927206f0f473a79bd"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.953885 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.953924 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" 
event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"23bc7ca34848d59dc4adb2c2a8c1b8f62fa8f1e4bd5450fe16a56b4db2ec068b"} Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.957102 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.974226 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.988419 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hzkk7"] Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.988838 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.989731 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.990091 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 
14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.990243 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.990838 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 14:50:28 crc kubenswrapper[4949]: I0120 14:50:28.991464 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.001149 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:28Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.013467 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020269 4949 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020328 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.020355 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.026967 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.048243 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.060700 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 
2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.077470 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.091968 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.110717 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.121659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-serviceca\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.121719 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694ct\" (UniqueName: \"kubernetes.io/projected/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-kube-api-access-694ct\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.121745 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-host\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.123709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.123859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.123987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.124029 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.124083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.124258 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.140464 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-c
onfig-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.180128 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/b
in\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.218176 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222615 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-host\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222680 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-serviceca\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694ct\" (UniqueName: \"kubernetes.io/projected/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-kube-api-access-694ct\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.222778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-host\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.223646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-serviceca\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.226923 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.266109 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694ct\" (UniqueName: \"kubernetes.io/projected/6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf-kube-api-access-694ct\") pod \"node-ca-hzkk7\" (UID: \"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\") " pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.278777 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.319499 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329880 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329894 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.329926 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.346345 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hzkk7" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.360635 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.404413 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432493 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.432570 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.440785 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.479204 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.535024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.536098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.536119 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.536138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.536151 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.641322 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.732639 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:55:20.077854693 +0000 UTC Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.744415 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.788656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:29 crc kubenswrapper[4949]: E0120 14:50:29.788795 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.788928 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.788656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:29 crc kubenswrapper[4949]: E0120 14:50:29.789119 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:29 crc kubenswrapper[4949]: E0120 14:50:29.789433 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847129 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.847142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949891 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949902 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.949941 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:29Z","lastTransitionTime":"2026-01-20T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.957705 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4" exitCode=0 Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.957782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.959714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hzkk7" event={"ID":"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf","Type":"ContainerStarted","Data":"5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.959786 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hzkk7" event={"ID":"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf","Type":"ContainerStarted","Data":"4d0f16022c36144668b61a893493fa80b463928f98cac81ff81851bf5710231f"} Jan 20 14:50:29 crc kubenswrapper[4949]: I0120 14:50:29.974157 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.003938 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:29Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.023143 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.035470 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.046185 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.052873 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.058959 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.073993 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.087352 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-m
anager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.101354 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.120676 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.141718 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155546 4949 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.155558 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.171480 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
7b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.195982 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.214310 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc 
kubenswrapper[4949]: I0120 14:50:30.236071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.257838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.258615 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.259710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedd
a11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.273608 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.284725 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.299166 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.312646 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.325292 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.357136 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360905 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.360936 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.396947 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.439821 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463902 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.463912 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.477166 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.518358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.558941 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc 
kubenswrapper[4949]: I0120 14:50:30.567200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.567300 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.595977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.644730 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.669844 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.679788 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.732969 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:14:18.775738944 +0000 UTC Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.772670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 
14:50:30.772681 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.875111 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.963863 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50" exitCode=0 Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.963914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977170 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.977207 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:30Z","lastTransitionTime":"2026-01-20T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:30 crc kubenswrapper[4949]: I0120 14:50:30.993071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:30Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.005292 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.017736 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.028974 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.048373 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.061579 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.075977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.080220 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.088121 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.099619 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.115563 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.128579 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.158088 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.182969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183014 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.183054 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.198098 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-
config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.234764 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.278068 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286079 4949 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286133 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.286143 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.389388 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.492375 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.600668 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704630 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.704657 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.733854 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:11:43.189921566 +0000 UTC Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.788505 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:31 crc kubenswrapper[4949]: E0120 14:50:31.788671 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.788802 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:31 crc kubenswrapper[4949]: E0120 14:50:31.789188 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.789244 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:31 crc kubenswrapper[4949]: E0120 14:50:31.789321 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806783 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806836 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.806872 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912624 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.912669 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:31Z","lastTransitionTime":"2026-01-20T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.971039 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.972941 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed" exitCode=0 Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.973474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed"} Jan 20 14:50:31 crc kubenswrapper[4949]: I0120 14:50:31.990484 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.001541 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:31Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.012911 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015254 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.015314 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.031358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.046755 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.061083 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.071323 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.083066 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.096120 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.112671 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117746 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117757 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117772 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.117783 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.130247 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.147095 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.152989 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.153156 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.153143682 +0000 UTC m=+35.962974540 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.157921 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.175794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.194142 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z 
is after 2025-08-24T17:21:41Z" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221142 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.221154 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.253980 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.254044 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.254078 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.254103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254181 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254198 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254213 4949 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254228 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254238 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254244 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254204 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254300 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254295 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254242102 +0000 UTC m=+36.064072970 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254333 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254324564 +0000 UTC m=+36.064155422 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254345 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254340075 +0000 UTC m=+36.064170933 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:32 crc kubenswrapper[4949]: E0120 14:50:32.254365 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:40.254358855 +0000 UTC m=+36.064189713 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.323942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.323988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.323999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.324016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.324029 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426751 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.426765 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.529111 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.631968 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632046 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.632057 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.734699 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:07:02.596406635 +0000 UTC Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.735385 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.838644 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.941277 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:32Z","lastTransitionTime":"2026-01-20T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.980914 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b" exitCode=0 Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.980964 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b"} Jan 20 14:50:32 crc kubenswrapper[4949]: I0120 14:50:32.996947 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:32Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.017249 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.031395 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043447 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.043477 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.051485 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.077543 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.097126 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.120323 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.151328 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.180037 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
0T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a3
04631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.221582 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.241112 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253505 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.253542 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.254170 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.277308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.292167 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.310110 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.321746 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:33Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356440 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.356542 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458789 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.458828 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.561333 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.663939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.663984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.663995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.664014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.664028 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.735170 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:48:53.961380966 +0000 UTC Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.766962 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.788549 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.788574 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.788632 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:33 crc kubenswrapper[4949]: E0120 14:50:33.788760 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:33 crc kubenswrapper[4949]: E0120 14:50:33.788910 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:33 crc kubenswrapper[4949]: E0120 14:50:33.789126 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.870566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
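Has your network provider started?"}

The three "Error syncing pod, skipping" entries above are the downstream effect of the NetworkReady=false condition the kubelet keeps recording: no new pod sandbox can be created for the pending pods until a network configuration appears in /etc/kubernetes/cni/net.d/. The sketch below shows, in spirit, the directory probe behind the "no CNI configuration file ... Has your network provider started?" message; it is an illustration only (the glob patterns and the function name are assumptions, not the cri-o/kubelet source).

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether at least one CNI network configuration
    // exists in confDir; the container runtime keeps the node NotReady
    // until this becomes true.
    func hasCNIConfig(confDir string) (bool, error) {
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pattern))
            if err != nil {
                return false, err
            }
            if len(matches) > 0 {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
        ok, err := hasCNIConfig(confDir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if !ok {
            fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
            os.Exit(1)
        }
        fmt.Println("CNI configuration present")
    }

On this node the network provider is OVN-Kubernetes; the ovnkube-node-z6zd5 containers are still in PodInitializing above, and the expectation is that once ovnkube-controller starts it writes the CNI configuration, at which point NetworkReady flips to true and sandbox creation can proceed.
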
Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.973217 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:33Z","lastTransitionTime":"2026-01-20T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.988089 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.998791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e"} Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.999655 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.999686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:33 crc kubenswrapper[4949]: I0120 14:50:33.999699 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.020596 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.029394 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.035659 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.042689 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4b
bd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.064986 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075726 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075802 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.075840 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.080272 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.099810 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.117789 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.135296 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.153885 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.174593 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.178858 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.178949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.178972 4949 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.179003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.179023 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.191800 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.209208 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.242903 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.259634 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282467 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.282949 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:
50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.301922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.326236 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.341109 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.358143 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.371364 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.385197 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.391413 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.406254 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.423568 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.437788 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.454380 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.471032 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488407 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488750 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.488801 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.510203 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-
config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.525220 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.542196 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.564166 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591879 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591930 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.591968 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.593695 4949 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.694703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.735764 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:43:42.563539574 +0000 UTC Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797390 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.797407 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.808091 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\
\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.828368 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.850160 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.882607 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.899666 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900470 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900484 4949 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.900511 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:34Z","lastTransitionTime":"2026-01-20T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.909971 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.925435 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.952689 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.965906 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.979041 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:34 crc kubenswrapper[4949]: I0120 14:50:34.990290 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:34Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.002719 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.005224 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708" exitCode=0 Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.005325 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.011978 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c
49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.030047 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.045803 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.063345 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.081909 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.095716 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.104993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.105010 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.115699 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.130459 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.152083 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c
49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.167778 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.181891 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.196811 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 
14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.207918 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.211449 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.229976 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.241239 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.252255 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.269358 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.283013 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.294303 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:35Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.310952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.311540 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.413785 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.413954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.414020 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.414109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.414172 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.517447 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.517816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.517943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.518075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.518238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.621238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.723738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724148 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.724730 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.736316 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:20:57.569609283 +0000 UTC Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.788809 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:35 crc kubenswrapper[4949]: E0120 14:50:35.788922 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.789175 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:35 crc kubenswrapper[4949]: E0120 14:50:35.789300 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.789327 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:35 crc kubenswrapper[4949]: E0120 14:50:35.789399 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.827923 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:35 crc kubenswrapper[4949]: I0120 14:50:35.930763 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:35Z","lastTransitionTime":"2026-01-20T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.011961 4949 generic.go:334] "Generic (PLEG): container finished" podID="da08b8e6-19e1-41fa-8e71-2988f3effb27" containerID="3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86" exitCode=0 Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.012069 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerDied","Data":"3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.026301 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035774 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.035826 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.048556 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.064830 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.080128 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.093638 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.112273 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.130823 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.138484 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.163373 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24
501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.181884 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.205104 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c
14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.219315 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.242928 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.245305 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.265306 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.279678 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.291425 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:36Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345852 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.345887 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.448572 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.551586 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654905 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.654917 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.737254 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:10:00.030606219 +0000 UTC Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.757954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861264 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.861394 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.963944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:36 crc kubenswrapper[4949]: I0120 14:50:36.964181 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:36Z","lastTransitionTime":"2026-01-20T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.023024 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" event={"ID":"da08b8e6-19e1-41fa-8e71-2988f3effb27","Type":"ContainerStarted","Data":"bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.027852 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/0.log" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.032396 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e" exitCode=1 Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.032444 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.033213 4949 scope.go:117] "RemoveContainer" containerID="9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.051929 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.066765 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.067361 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.081416 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.098772 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.113341 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.128507 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.147629 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.167023 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169750 4949 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.169776 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.179852 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.201795 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.224245 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.254099 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee78
66be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.270599 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.272653 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.287281 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.298028 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.316414 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"message\\\":\\\"r removal\\\\nI0120 14:50:36.574868 6200 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 14:50:36.574890 6200 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 14:50:36.574899 6200 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 14:50:36.574933 6200 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 14:50:36.574946 6200 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 14:50:36.574949 6200 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 14:50:36.574987 6200 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:36.574998 6200 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:50:36.575003 6200 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 14:50:36.575005 6200 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 14:50:36.575013 6200 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 14:50:36.575024 6200 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 14:50:36.575031 6200 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:36.575101 6200 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 14:50:36.575153 6200 factory.go:656] Stopping watch factory\\\\nI0120 14:50:36.575182 6200 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.331164 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.342281 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.353407 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.365577 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374839 4949 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.374911 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.392089 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.409570 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.428209 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.447531 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.460432 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.474237 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477158 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.477189 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.506695 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
0T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a3
04631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.519397 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.537229 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.548355 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-20T14:50:37Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.579925 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.683181 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.737873 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:45:54.35548092 +0000 UTC Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.785782 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.788851 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.788903 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:37 crc kubenswrapper[4949]: E0120 14:50:37.788964 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.788865 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:37 crc kubenswrapper[4949]: E0120 14:50:37.789077 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:37 crc kubenswrapper[4949]: E0120 14:50:37.789141 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.888750 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991159 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:37 crc kubenswrapper[4949]: I0120 14:50:37.991222 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:37Z","lastTransitionTime":"2026-01-20T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.037348 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.038312 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.039438 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/0.log" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.043856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.045174 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.045421 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.051403 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060601 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 
14:50:38.060727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.060776 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.072497 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemo
n\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.075855 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.080142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.084972 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.094778 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.097315 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099501 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.099554 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.113964 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.116645 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3
efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.121628 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.128917 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.137268 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: E0120 14:50:38.137392 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138967 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.138978 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.145077 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"nam
e\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.167808 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.186284 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.211325 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.224766 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.241356 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.252660 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"message\\\":\\\"r removal\\\\nI0120 14:50:36.574868 6200 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 14:50:36.574890 6200 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0120 14:50:36.574899 6200 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 14:50:36.574933 6200 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 14:50:36.574946 6200 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 14:50:36.574949 6200 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0120 14:50:36.574987 6200 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:36.574998 6200 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:50:36.575003 6200 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 14:50:36.575005 6200 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 14:50:36.575013 6200 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 14:50:36.575024 6200 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 14:50:36.575031 6200 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:36.575101 6200 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0120 14:50:36.575153 6200 factory.go:656] Stopping watch factory\\\\nI0120 14:50:36.575182 6200 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.267378 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster
-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.288476 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.303250 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:38Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.344559 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448130 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.448239 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550057 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550119 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.550149 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.653404 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.738465 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:36:08.989457762 +0000 UTC Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.757632 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859744 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.859805 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962748 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:38 crc kubenswrapper[4949]: I0120 14:50:38.962907 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:38Z","lastTransitionTime":"2026-01-20T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.050789 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.051839 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/0.log" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.057087 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" exitCode=1 Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.057145 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.057212 4949 scope.go:117] "RemoveContainer" containerID="9f737b83e80789dc886713f691738507daa3d59c49a0032d7a6d7dbe148b1d6e" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.058375 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.058816 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.065961 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066041 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.066097 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.079977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.099007 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.113894 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.121223 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb"] Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.121948 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.124850 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.125024 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.133401 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.150334 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.166967 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168895 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168905 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.168939 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.182701 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.199922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.211540 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 
14:50:39.230463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.230560 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.230609 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.230644 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcs88\" (UniqueName: \"kubernetes.io/projected/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-kube-api-access-jcs88\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.241700 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.257368 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.271942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272053 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.272117 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.282191 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.299549 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.316636 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: 
\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332329 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcs88\" (UniqueName: \"kubernetes.io/projected/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-kube-api-access-jcs88\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332389 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.332478 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.333158 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.333287 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.333891 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.340770 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.349697 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.353936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcs88\" (UniqueName: 
\"kubernetes.io/projected/9957b569-5b87-4d8d-bec2-4a5d4a8b891c-kube-api-access-jcs88\") pod \"ovnkube-control-plane-749d76644c-ghqnb\" (UID: \"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.364873 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377143 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.377162 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.382321 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.398247 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.419145 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.434012 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.441446 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.463588 4949 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.480494 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.481610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.482478 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.500235 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.522928 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: W0120 14:50:39.546750 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9957b569_5b87_4d8d_bec2_4a5d4a8b891c.slice/crio-1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e WatchSource:0}: Error finding container 1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e: Status 404 returned error can't find the container with id 1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.554129 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb6
8e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.575053 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.585649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.590403 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.604994 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.623786 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.641782 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:39Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.687933 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.739503 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:51:27.138296898 +0000 UTC Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.788022 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.788111 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.788111 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.788271 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.788354 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:39 crc kubenswrapper[4949]: E0120 14:50:39.788429 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789976 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.789992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.790004 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.893354 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:39 crc kubenswrapper[4949]: I0120 14:50:39.996846 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:39Z","lastTransitionTime":"2026-01-20T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.064718 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" event={"ID":"9957b569-5b87-4d8d-bec2-4a5d4a8b891c","Type":"ContainerStarted","Data":"1e7bdecc3873579562c8c437c317dd59fffbb519a2e670015c51309d4c87494e"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.068127 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099673 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.099684 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.203415 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.241936 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.242273 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.242243732 +0000 UTC m=+52.052074630 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306582 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.306649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343115 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343179 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343217 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.343256 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343337 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343348 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343388 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343400 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343488 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343375 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343555 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343565 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343417 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.343397694 +0000 UTC m=+52.153228552 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343594 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.34358436 +0000 UTC m=+52.153415218 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343614 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.343605641 +0000 UTC m=+52.153436499 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.343627 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.343620871 +0000 UTC m=+52.153451729 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409857 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.409885 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512128 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512331 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.512347 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.611235 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hlfls"] Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.611763 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.611831 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.615276 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.632845 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode
\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.651403 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.668044 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.685783 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.697757 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc 
kubenswrapper[4949]: I0120 14:50:40.710613 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.717930 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.726477 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.739793 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:02:50.312867379 +0000 UTC Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.744202 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.747626 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.747743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7s46\" (UniqueName: \"kubernetes.io/projected/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-kube-api-access-r7s46\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.759564 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.770908 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.784966 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.809014 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820668 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.820677 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.824770 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.841584 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.848221 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " 
pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.848313 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7s46\" (UniqueName: \"kubernetes.io/projected/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-kube-api-access-r7s46\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.848437 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: E0120 14:50:40.848534 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:41.34849543 +0000 UTC m=+37.158326298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.855146 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.866900 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7s46\" (UniqueName: \"kubernetes.io/projected/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-kube-api-access-r7s46\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.868840 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.895981 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a381
3a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:40Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.923830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:40 crc kubenswrapper[4949]: I0120 14:50:40.924274 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:40Z","lastTransitionTime":"2026-01-20T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.027843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.028847 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.077698 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" event={"ID":"9957b569-5b87-4d8d-bec2-4a5d4a8b891c","Type":"ContainerStarted","Data":"77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.132096 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236733 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.236761 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339867 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.339954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.353387 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.353491 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.353570 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:42.353554476 +0000 UTC m=+38.163385334 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.443332 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546739 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546767 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.546785 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.649900 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.741238 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:39:40.129714503 +0000 UTC Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752431 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752477 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.752557 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.788936 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.788997 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.789085 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.789085 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.789197 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:41 crc kubenswrapper[4949]: E0120 14:50:41.789316 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856141 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.856164 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.958979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959044 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:41 crc kubenswrapper[4949]: I0120 14:50:41.959078 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:41Z","lastTransitionTime":"2026-01-20T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.061968 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.094216 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" event={"ID":"9957b569-5b87-4d8d-bec2-4a5d4a8b891c","Type":"ContainerStarted","Data":"efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.115620 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.135687 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.155277 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166687 4949 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.166732 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.172165 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.190915 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.207399 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.219417 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.233739 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.249842 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.268508 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.269566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.284019 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.297729 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.318135 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.336594 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a381
3a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.350885 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.361425 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.363828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:42 crc kubenswrapper[4949]: E0120 14:50:42.364129 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:42 crc kubenswrapper[4949]: E0120 14:50:42.364299 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:44.364268996 +0000 UTC m=+40.174099854 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.371436 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.372709 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:42Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.474236 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577125 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.577258 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.680703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.742209 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:34:29.132851326 +0000 UTC Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786757 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.786792 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.788354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:42 crc kubenswrapper[4949]: E0120 14:50:42.788637 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.890327 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:42 crc kubenswrapper[4949]: I0120 14:50:42.993684 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:42Z","lastTransitionTime":"2026-01-20T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096674 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.096713 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.202315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.203544 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306794 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.306832 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.410257 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.513936 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.514140 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616617 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.616646 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.719304 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.742894 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:11:07.789330887 +0000 UTC Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.788631 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.788721 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:43 crc kubenswrapper[4949]: E0120 14:50:43.788753 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.788634 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:43 crc kubenswrapper[4949]: E0120 14:50:43.788883 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:43 crc kubenswrapper[4949]: E0120 14:50:43.788976 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822314 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.822340 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925930 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:43 crc kubenswrapper[4949]: I0120 14:50:43.925960 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:43Z","lastTransitionTime":"2026-01-20T14:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029095 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.029198 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131724 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.131743 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.235420 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338633 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.338709 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.387616 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:44 crc kubenswrapper[4949]: E0120 14:50:44.387808 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:44 crc kubenswrapper[4949]: E0120 14:50:44.387892 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:48.387867255 +0000 UTC m=+44.197698153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.441944 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544512 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544534 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.544558 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
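
[editor's note] The nestedpendingoperations entry above shows the volume mount retry loop: the mount failed at 14:50:44 and no retry is permitted until 14:50:48, a durationBeforeRetry of 4s, consistent with a doubling backoff that has already failed a few times. A sketch of that schedule, with the initial delay and cap as assumptions rather than values read from this kubelet build:

// backoff.go - sketch of the doubling retry delay behind
// "durationBeforeRetry 4s"; initial delay and cap are assumptions.
package main

import (
	"fmt"
	"time"
)

func main() {
	d := 500 * time.Millisecond              // assumed initial delay
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	for i := 0; i < 10; i++ {
		fmt.Printf("failure %d -> wait %v\n", i+1, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	// failure 4 waits 4s, matching the journal entry above.
}
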
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.647865 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.743582 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:30:23.431328401 +0000 UTC Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.750963 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.788276 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:44 crc kubenswrapper[4949]: E0120 14:50:44.788419 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.808606 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.828035 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.848121 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.857895 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.857962 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.857978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.858059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.858102 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.871611 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-
config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.885736 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.898407 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.917133 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.929998 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.954173 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\
"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961512 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.961574 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:44Z","lastTransitionTime":"2026-01-20T14:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.969636 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:44 crc kubenswrapper[4949]: I0120 14:50:44.992097 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:44Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.003977 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.018954 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.047702 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597
e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064492 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064507 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.064532 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.067591 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.082167 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.094252 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:45Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.167432 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270376 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.270437 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.374174 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.477405 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.580974 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.581118 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683293 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.683382 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
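Every NodeNotReady heartbeat in this stretch carries the same root cause: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet. The real probe happens in the CRI runtime via libcni; the standalone helper below is only an illustration of the directory scan it performs:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. libcni conventionally accepts .conf, .conflist
// and .json; this sketch mirrors that convention.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// This is the condition the runtime keeps reporting above.
		fmt.Println("no CNI configuration file in /etc/kubernetes/cni/net.d/")
		return
	}
	fmt.Println("CNI configuration present")
}
```

The condition clears on its own once the network operator's pods write a config file into that directory, at which point the runtime flips NetworkReady to true on its next status poll.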
Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.743809 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:14:52.727963133 +0000 UTC Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.786478 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.788909 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.788918 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.788940 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:45 crc kubenswrapper[4949]: E0120 14:50:45.789007 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:45 crc kubenswrapper[4949]: E0120 14:50:45.789109 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:45 crc kubenswrapper[4949]: E0120 14:50:45.789217 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889780 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889815 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889852 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.889863 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992715 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:45 crc kubenswrapper[4949]: I0120 14:50:45.992752 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:45Z","lastTransitionTime":"2026-01-20T14:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095143 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.095179 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198447 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.198683 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302375 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.302571 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.404738 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.507649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.610985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.611156 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713338 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.713420 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.744926 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:16:32.520740933 +0000 UTC Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.788475 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:46 crc kubenswrapper[4949]: E0120 14:50:46.788724 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.816631 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
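The certificate_manager lines are worth reading across iterations: the kubelet-serving certificate is valid until 2026-02-24, yet the reported rotation deadline (2025-11-22 on one pass, 2026-01-03 on the next) is already in the past and changes on every attempt, because the deadline is re-drawn with jitter each time rotation is scheduled. A sketch in the spirit of client-go's certificate manager; the 0.7-0.9 lifetime window and the issuance date below are assumptions for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a jittered point late in the certificate's
// lifetime, in the spirit of client-go's certificate manager. The
// 0.7 + 0.2*rand window is an assumption here.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issuance date
	deadline := nextRotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("deadline passed: rotate now")
	}
}
```

A deadline in the past means the kubelet tries to rotate immediately on every sync, which is why the line reappears roughly once per second with a different deadline each time.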
Has your network provider started?"} Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920596 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920643 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:46 crc kubenswrapper[4949]: I0120 14:50:46.920669 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:46Z","lastTransitionTime":"2026-01-20T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.023462 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.126920 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.126978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.126992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.127010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.127024 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.228957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229021 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229038 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.229050 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.332199 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.435630 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.538733 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641785 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641810 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.641865 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.744298 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.745507 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:53:42.893962413 +0000 UTC Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.788864 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.788954 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.788965 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:47 crc kubenswrapper[4949]: E0120 14:50:47.789114 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:47 crc kubenswrapper[4949]: E0120 14:50:47.789257 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:47 crc kubenswrapper[4949]: E0120 14:50:47.789378 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.847877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.847963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.847986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.848013 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.848036 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:47 crc kubenswrapper[4949]: I0120 14:50:47.951384 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:47Z","lastTransitionTime":"2026-01-20T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.054256 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157413 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.157454 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.222206 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.243335 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249422 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
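The failed node-status update above shows what the kubelet sends on every heartbeat: a strategic-merge patch of .status (the $setElementOrder/conditions directive preserves the ordering of the conditions list) carrying allocatable and capacity, the four node conditions, the cached image list, and nodeInfo. The node.network-node-identity.openshift.io admission webhook must admit that patch, and its expired serving certificate makes every attempt fail with the same x509 error as the pod patch earlier. A drastically reduced sketch of such a patch body for just the Ready condition; the types are invented for illustration, whereas the real kubelet diffs the full v1.Node object into a strategic-merge patch:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// condition is a stand-in for v1.NodeCondition, reduced to the fields
// visible in the patch above.
type condition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
	Reason string `json:"reason,omitempty"`
}

type status struct {
	Conditions []condition `json:"conditions"`
}

func main() {
	next := status{Conditions: []condition{
		{Type: "Ready", Status: "False", Reason: "KubeletNotReady"},
	}}
	patch, _ := json.Marshal(map[string]any{"status": next})
	// The kubelet PATCHes a body like this against the node's /status
	// subresource; here we only print the Ready-condition fragment.
	fmt.Println(string(patch))
}
```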
event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.249506 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.268642 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276584 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.276662 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.298417 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304542 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.304588 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.322794 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.328741 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.345937 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:48Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.346054 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348842 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.348925 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.430386 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.430706 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.430833 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:50:56.430800319 +0000 UTC m=+52.240631207 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.451876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.451944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.451967 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.452000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.452020 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.554667 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656632 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.656711 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.746254 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:37:58.469973743 +0000 UTC Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759736 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.759813 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.788355 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:48 crc kubenswrapper[4949]: E0120 14:50:48.788639 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.862748 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.965881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:48 crc kubenswrapper[4949]: I0120 14:50:48.966628 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:48Z","lastTransitionTime":"2026-01-20T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.069630 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.173402 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276593 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.276676 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.380116 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.483232 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.586917 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.689222 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.746727 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:04:53.222501832 +0000 UTC Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.788283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.788340 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.788488 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:49 crc kubenswrapper[4949]: E0120 14:50:49.788486 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:49 crc kubenswrapper[4949]: E0120 14:50:49.788712 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:49 crc kubenswrapper[4949]: E0120 14:50:49.789464 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.790041 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792382 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.792437 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896429 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896547 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:49 crc kubenswrapper[4949]: I0120 14:50:49.896566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999187 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:49.999240 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:49Z","lastTransitionTime":"2026-01-20T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101701 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.101712 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.128052 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.131907 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.132305 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.153581 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc90
69edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.178344 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.196728 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.204464 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.213068 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.237366 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.268137 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.286077 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 
14:50:50.302886 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308174 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.308190 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.316543 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.328943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\
\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.338871 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.350813 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.364585 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.378587 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.389317 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.399310 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.410941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.410985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.410997 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.411014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.411027 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.423126 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66
cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:50Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513247 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc 
kubenswrapper[4949]: I0120 14:50:50.513313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.513341 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.615348 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717879 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717924 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.717952 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.747452 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:36:22.689668057 +0000 UTC Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.788431 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:50 crc kubenswrapper[4949]: E0120 14:50:50.788633 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820613 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.820622 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.922929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.922988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.923004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.923026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:50 crc kubenswrapper[4949]: I0120 14:50:50.923043 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:50Z","lastTransitionTime":"2026-01-20T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.026693 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.129346 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.137401 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.138687 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/1.log" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.142596 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" exitCode=1 Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.142651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.142698 4949 scope.go:117] "RemoveContainer" containerID="6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.143495 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.143927 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.191113 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.214222 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233242 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.233314 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.240270 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.259226 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.275234 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.292897 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597
e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d443874ed20afddec043256a6647f204f76a3813a90fc6b3a6e37b74b32c525\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:38Z\\\",\\\"message\\\":\\\"oval\\\\nI0120 14:50:37.964396 6371 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 14:50:37.964411 6371 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 14:50:37.964501 6371 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.964686 6371 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:50:37.965103 6371 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965537 6371 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965772 6371 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:50:37.965923 6371 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966357 6371 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 14:50:37.966380 6371 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"
/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.312585 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.326729 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.336249 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.339975 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.351308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.364383 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.377819 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.394065 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.405135 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.417718 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.428673 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.438508 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:51Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.439712 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.543555 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647932 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.647981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.648039 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.748682 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:09:02.616551918 +0000 UTC Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750716 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.750729 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.788576 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.788697 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.788576 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.788760 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.788972 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:51 crc kubenswrapper[4949]: E0120 14:50:51.789225 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.853405 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:51 crc kubenswrapper[4949]: I0120 14:50:51.956693 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:51Z","lastTransitionTime":"2026-01-20T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060314 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060427 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.060448 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.151031 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.156811 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:50:52 crc kubenswrapper[4949]: E0120 14:50:52.157126 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162770 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.162900 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.179981 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.199102 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.214378 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.229480 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.242866 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.260091 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265866 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.265940 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.277136 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.294499 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.305992 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.320932 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.343900 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.369699 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.372704 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.389856 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.410014 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.424419 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.440579 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.456171 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.472885 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.503789 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.515160 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.530711 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.551020 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.565582 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.575872 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.581754 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.593049 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.605238 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.623910 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.637588 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.652257 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.669547 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678356 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.678396 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.688594 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.710341 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.725396 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.748969 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:33:27.960967303 +0000 UTC Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.749167 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.766378 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.780950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.781092 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.788484 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:52 crc kubenswrapper[4949]: E0120 14:50:52.788748 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
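
Every status patch in this stretch fails for the same reason: the API server cannot finish the TLS handshake with the network-node-identity webhook on 127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock now reads 2026-01-20. One way to confirm the expiry from the node is to read the served certificate directly. The sketch below is purely diagnostic and not part of any component in this log; InsecureSkipVerify is deliberate, since the point is to inspect the expired certificate, not to trust it.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Endpoint taken from the "failed calling webhook" errors in this log.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true, // read the certificate even though it is expired
    	})
    	if err != nil {
    		log.Fatalf("dial webhook: %v", err)
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject:    %s\n", cert.Subject)
    	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
    	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
    	if time.Now().After(cert.NotAfter) {
    		fmt.Println("serving certificate is expired, matching the x509 errors in this log")
    	}
    }

This is consistent with a cluster image resumed long after its certificates lapsed: once the clock passes NotAfter, every call to this admission webhook fails, and with it every pod status patch, until the certificates are rotated.
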
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.790720 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df
403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.807087 4949 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:52Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.884323 4949 setters.go:603] "Node became not ready" node="crc" 
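
Interleaved with the webhook errors, the kubelet republishes NodeNotReady because the container runtime reports NetworkReady=false: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ (the network-plugin pods that would write one are still starting). The check the message describes boils down to scanning that directory for a network config; below is a standalone sketch of the same idea, assuming the usual CNI file extensions (.conf, .conflist, .json).

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // directory named in the log messages
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", dir, err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			fmt.Printf("CNI config present: %s\n", filepath.Join(dir, e.Name()))
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no CNI configuration file; the node will keep reporting NotReady")
    	}
    }

Until a network config lands in that directory, these five-event blocks (NodeHasSufficientMemory through NodeNotReady, plus the Ready=False condition) repeat on every node-status sync, roughly every 100 ms in this capture.
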
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:52 crc kubenswrapper[4949]: I0120 14:50:52.987309 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:52Z","lastTransitionTime":"2026-01-20T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.091120 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.193694 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296366 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.296406 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398593 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.398664 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.501980 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605328 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.605446 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709185 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.709381 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.749726 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:14:34.190966962 +0000 UTC Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.791871 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.791947 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:53 crc kubenswrapper[4949]: E0120 14:50:53.792099 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.792147 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:53 crc kubenswrapper[4949]: E0120 14:50:53.792282 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:53 crc kubenswrapper[4949]: E0120 14:50:53.792476 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.812932 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.916986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917022 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917030 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:53 crc kubenswrapper[4949]: I0120 14:50:53.917052 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:53Z","lastTransitionTime":"2026-01-20T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.019605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.019856 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.020000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.020120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.020227 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.122988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.123119 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.225719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.225996 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.226062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.226121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.226192 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.329233 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431668 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.431766 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.535229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.535987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.536006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.536029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.536045 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.639443 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742891 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.742938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.750238 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:40:16.057542066 +0000 UTC Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.788554 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:54 crc kubenswrapper[4949]: E0120 14:50:54.788810 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.803621 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.821922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.842042 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.848954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.857138 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.877788 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.893885 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.909202 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.928071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.946341 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 
14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951578 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.951644 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:54Z","lastTransitionTime":"2026-01-20T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.979215 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:54 crc kubenswrapper[4949]: I0120 14:50:54.996882 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:54Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.014950 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z"
Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.030669 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.055072 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.055846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056228 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.056487 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.086678 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.106659 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.128333 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.145661 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:55Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.160859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.161569 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.264978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265046 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.265128 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368341 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368445 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.368462 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.471824 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.575364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.575733 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.575881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.576023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.576252 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.678832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679796 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.679856 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.751196 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:11:21.651614726 +0000 UTC Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.782959 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.788464 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.788492 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.788544 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:55 crc kubenswrapper[4949]: E0120 14:50:55.788617 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:55 crc kubenswrapper[4949]: E0120 14:50:55.788704 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:55 crc kubenswrapper[4949]: E0120 14:50:55.788813 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885896 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.885936 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989179 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989207 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:55 crc kubenswrapper[4949]: I0120 14:50:55.989225 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:55Z","lastTransitionTime":"2026-01-20T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091778 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.091858 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.194163 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297279 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.297364 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.316733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.316927 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:51:28.3168855 +0000 UTC m=+84.126716388 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.400474 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418304 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418438 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.418481 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418673 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418698 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418716 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418775 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:51:28.418754775 +0000 UTC m=+84.228585663 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418844 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418885 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:51:28.418872539 +0000 UTC m=+84.228703427 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.418919 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419039 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:51:28.419014673 +0000 UTC m=+84.228845601 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419064 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419100 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419119 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.419209 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-20 14:51:28.419179588 +0000 UTC m=+84.229010476 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.503826 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.518966 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.519200 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.519316 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:51:12.519282726 +0000 UTC m=+68.329113644 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607555 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.607581 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710825 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.710872 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.751613 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:02:04.33305525 +0000 UTC Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.789027 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:56 crc kubenswrapper[4949]: E0120 14:50:56.789206 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814866 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.814882 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:56 crc kubenswrapper[4949]: I0120 14:50:56.918782 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:56Z","lastTransitionTime":"2026-01-20T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021174 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021666 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021770 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.021866 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124291 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.124305 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227558 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.227594 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330293 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.330306 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.433931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.434285 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.434556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.434794 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.435015 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.537987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538038 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.538080 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.640781 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744441 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.744493 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.752063 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:26:40.996577561 +0000 UTC Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.788479 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.789071 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.789385 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:57 crc kubenswrapper[4949]: E0120 14:50:57.789365 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:57 crc kubenswrapper[4949]: E0120 14:50:57.789617 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:57 crc kubenswrapper[4949]: E0120 14:50:57.789774 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.848143 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:57 crc kubenswrapper[4949]: I0120 14:50:57.950786 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:57Z","lastTransitionTime":"2026-01-20T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.054319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.054812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.054995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.055163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.055374 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.158940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159008 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.159079 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.262899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.263208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.263592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.263969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.264162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.367929 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.470708 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574130 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.574230 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576123 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.576140 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.596724 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602513 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.602559 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.627204 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.632946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.633078 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.654687 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.660192 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.677907 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681966 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.681990 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.701948 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:50:58Z is after 2025-08-24T17:21:41Z" Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.702240 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.703944 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.753713 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:10:01.398985922 +0000 UTC Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.788288 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:50:58 crc kubenswrapper[4949]: E0120 14:50:58.788484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806721 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.806806 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910158 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910176 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:58 crc kubenswrapper[4949]: I0120 14:50:58.910187 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:58Z","lastTransitionTime":"2026-01-20T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013094 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.013129 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.117645 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.220242 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323751 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.323769 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427234 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.427339 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.531643 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.635304 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738691 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.738830 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.754912 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:58:36.454111325 +0000 UTC Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.788386 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.788402 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.788508 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:50:59 crc kubenswrapper[4949]: E0120 14:50:59.788706 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:50:59 crc kubenswrapper[4949]: E0120 14:50:59.788863 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:50:59 crc kubenswrapper[4949]: E0120 14:50:59.789009 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842434 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.842450 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:50:59 crc kubenswrapper[4949]: I0120 14:50:59.944745 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:50:59Z","lastTransitionTime":"2026-01-20T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048131 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048292 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.048315 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.151483 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.254208 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.361256 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.464302 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.567774 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:00Z","lastTransitionTime":"2026-01-20T14:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.755116 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:18:41.128786152 +0000 UTC
Jan 20 14:51:00 crc kubenswrapper[4949]: I0120 14:51:00.788468 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:00 crc kubenswrapper[4949]: E0120 14:51:00.788608 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.083791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.083947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.083985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.084015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.084037 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:01Z","lastTransitionTime":"2026-01-20T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.756176 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:23:46.188106121 +0000 UTC
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.788955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.788973 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:01 crc kubenswrapper[4949]: I0120 14:51:01.789094 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:51:01 crc kubenswrapper[4949]: E0120 14:51:01.789295 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:51:01 crc kubenswrapper[4949]: E0120 14:51:01.789452 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:51:01 crc kubenswrapper[4949]: E0120 14:51:01.789632 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
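
The "no CNI configuration file in /etc/kubernetes/cni/net.d/" text repeated in every entry above is the container runtime's network-readiness probe, which amounts to checking whether that directory holds at least one CNI network config. Below is a minimal Go sketch of such a check; it illustrates the logic only and is not the actual CRI-O or kubelet source, and the accepted extensions (.conf, .conflist, .json) are assumed from common CNI conventions.

    // cnicheck.go - illustrative sketch of a CNI-style network-readiness probe.
    // The real check lives in the container runtime (CRI-O here); extensions
    // below follow common CNI conventions and are an assumption.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func networkReady(confDir string) (bool, error) {
        entries, err := os.ReadDir(confDir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil // at least one CNI config present
            }
        }
        return false, fmt.Errorf("no CNI configuration file in %s", confDir)
    }

    func main() {
        ready, err := networkReady("/etc/kubernetes/cni/net.d")
        fmt.Println("NetworkReady:", ready, "err:", err)
    }

Until a file appears in that directory, the runtime keeps reporting NetworkReady=false and the kubelet keeps re-posting the NodeNotReady condition seen throughout this log.
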
Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.011841 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
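
Each "Node became not ready" entry embeds the node's Ready condition as inline JSON. The following self-contained Go sketch decodes one of the condition payloads from this log; the struct mirrors the fields visible above and is declared locally rather than imported from k8s.io/api, so the snippet has no external dependencies.

    // condition.go - decode the Ready condition JSON that setters.go logs.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Payload copied verbatim from the 14:51:02.011841 entry above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:02Z","lastTransitionTime":"2026-01-20T14:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }
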
Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.756388 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:19:16.123502815 +0000 UTC
Jan 20 14:51:02 crc kubenswrapper[4949]: I0120 14:51:02.788244 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:02 crc kubenswrapper[4949]: E0120 14:51:02.788416 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.043999 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:03Z","lastTransitionTime":"2026-01-20T14:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.757463 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:11:15.346498217 +0000 UTC
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.787918 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.787956 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:03 crc kubenswrapper[4949]: I0120 14:51:03.787924 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:51:03 crc kubenswrapper[4949]: E0120 14:51:03.788115 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:51:03 crc kubenswrapper[4949]: E0120 14:51:03.788237 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:51:03 crc kubenswrapper[4949]: E0120 14:51:03.788346 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
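
The certificate_manager.go entries report the same kubelet-serving certificate expiry (2026-02-24 05:53:03 UTC) but a different rotation deadline on every retry (2025-11-16, 2025-12-23, 2025-12-30, 2025-12-27 so far). That pattern is consistent with client-go's certificate manager re-drawing a jittered deadline within the later portion of the certificate's lifetime each time it evaluates rotation. A sketch of that computation follows; the 70-90% window (0.7 + 0.2*rand) and the issue time are assumptions for illustration, not values quoted from the upstream source.

    // rotation.go - sketch of a jittered certificate-rotation deadline.
    // Assumption: the deadline is drawn uniformly in the 70%-90% span of
    // the certificate lifetime; the issue time below is also assumed.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issue time
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }

Each run of the loop lands on a different date inside the window, which matches the shifting deadlines logged above.
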
Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.075898 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
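
The status-manager failures just below are rejected by the pod.network-node-identity.openshift.io webhook because its TLS certificate is outside its validity window ("current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z"). The underlying test is the standard x509 NotBefore/NotAfter comparison, sketched here with the times from that entry; the certificate's issue time is a stand-in, since the real certificate is not in the log.

    // validity.go - the window check behind
    // "x509: certificate has expired or is not yet valid".
    package main

    import (
        "fmt"
        "time"
    )

    func checkValidity(now, notBefore, notAfter time.Time) error {
        if now.Before(notBefore) {
            return fmt.Errorf("certificate is not yet valid: current time %s is before %s",
                now.Format(time.RFC3339), notBefore.Format(time.RFC3339))
        }
        if now.After(notAfter) {
            return fmt.Errorf("certificate has expired: current time %s is after %s",
                now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
        }
        return nil
    }

    func main() {
        now, _ := time.Parse(time.RFC3339, "2026-01-20T14:51:04Z")
        notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
        notBefore := notAfter.Add(-90 * 24 * time.Hour) // assumed issue time
        fmt.Println(checkValidity(now, notBefore, notAfter))
    }

Because the webhook call fails, the kubelet cannot patch pod status for openshift-kube-scheduler-crc or ovnkube-node-z6zd5, as the entries that follow show.
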
Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.758085 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:19:09.406606407 +0000 UTC
Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.787996 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:04 crc kubenswrapper[4949]: E0120 14:51:04.790252 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.800844 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.811329 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c
5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.852142 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.877199 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.896990 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.903833 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:04Z","lastTransitionTime":"2026-01-20T14:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.915731 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.934776 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.955316 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.970710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:04 crc kubenswrapper[4949]: I0120 14:51:04.989798 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:04Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.004926 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008129 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.008149 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.027138 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.045258 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.078192 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110341 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110438 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.110453 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.119625 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
0T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a3
04631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.140715 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.163860 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.178548 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.191964 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:05Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212820 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.212870 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.316856 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.419710 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.522919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.522975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.522993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.523015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.523029 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625668 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.625727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728542 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728614 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728632 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.728676 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.758419 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:03:16.223046793 +0000 UTC Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.788898 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.788937 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:05 crc kubenswrapper[4949]: E0120 14:51:05.789123 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.789210 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:05 crc kubenswrapper[4949]: E0120 14:51:05.789439 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:05 crc kubenswrapper[4949]: E0120 14:51:05.789542 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.832882 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.937992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:05 crc kubenswrapper[4949]: I0120 14:51:05.938142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:05Z","lastTransitionTime":"2026-01-20T14:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.041139 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144235 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144357 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.144377 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.247644 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.350641 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.453591 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.556166 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659088 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659140 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.659201 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.758930 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:45:13.835462301 +0000 UTC Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762370 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.762422 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.788345 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:06 crc kubenswrapper[4949]: E0120 14:51:06.788571 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.790019 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:51:06 crc kubenswrapper[4949]: E0120 14:51:06.790465 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.870983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.871020 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.973838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.973980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.974003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.974027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:06 crc kubenswrapper[4949]: I0120 14:51:06.974044 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:06Z","lastTransitionTime":"2026-01-20T14:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.077546 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.180864 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283427 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.283465 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387388 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.387447 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.490957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.593980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.594160 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.698867 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.759452 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:51:26.6739738 +0000 UTC Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.788985 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:07 crc kubenswrapper[4949]: E0120 14:51:07.789228 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.789577 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:07 crc kubenswrapper[4949]: E0120 14:51:07.789684 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.789901 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:07 crc kubenswrapper[4949]: E0120 14:51:07.789986 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.802739 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.906948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907032 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907061 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:07 crc kubenswrapper[4949]: I0120 14:51:07.907079 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:07Z","lastTransitionTime":"2026-01-20T14:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.009604 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.112929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.113087 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215631 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.215669 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.318694 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.421854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.421995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.422015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.422067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.422087 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.524919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.524975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.524988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.525002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.525013 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.627454 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.730955 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.748403 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.760449 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:09:06.436177582 +0000 UTC Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.762448 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.766735 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.780102 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.783722 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.790616 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.790735 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.796715 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.801796 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.817478 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826132 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.826202 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.842811 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:08Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:08 crc kubenswrapper[4949]: E0120 14:51:08.843088 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844897 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844916 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.844957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947628 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:08 crc kubenswrapper[4949]: I0120 14:51:08.947784 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:08Z","lastTransitionTime":"2026-01-20T14:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051131 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051159 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.051159 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154273 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.154320 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256787 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256836 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.256864 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.359962 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.462417 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.564474 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.666989 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.760859 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:47:39.321821647 +0000 UTC
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768970 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.768987 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.788276 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.788289 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:51:09 crc kubenswrapper[4949]: E0120 14:51:09.788470 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.788298 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:09 crc kubenswrapper[4949]: E0120 14:51:09.788662 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
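
The second failure thread is independent of the webhook: NetworkReady stays false because the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so the sandboxes for the network-check and networking-console-plugin pods above cannot be created and each pod sync is skipped. The readiness check effectively reduces to scanning that directory for a loadable network config; a rough Go equivalent of the probe (illustrative only; the .conf/.conflist/.json extensions follow the usual libcni convention and are an assumption here):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        const confDir = "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(confDir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", confDir, err)
        }
        var found []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni conventionally loads
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            // The state the kubelet keeps reporting: NetworkReady=false.
            fmt.Println("no CNI configuration files; network plugin not ready")
            return
        }
        fmt.Println("CNI configs:", found)
    }

An empty directory is exactly the state behind the log's "Has your network provider started?" hint: the network plugin has not yet written its configuration.
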
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.872513 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975880 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 14:51:09 crc kubenswrapper[4949]: I0120 14:51:09.975880 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:09Z","lastTransitionTime":"2026-01-20T14:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.078870 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181279 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.181362 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.284299 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387224 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.387278 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489596 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489614 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489640 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.489657 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.592329 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694711 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.694741 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.761947 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:18:02.316334791 +0000 UTC
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.788855 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:10 crc kubenswrapper[4949]: E0120 14:51:10.789085 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796651 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
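
Interleaved with the status loop, certificate_manager.go keeps recomputing the kubelet-serving rotation deadline: 2026-01-15 at 14:51:09, now 2026-01-10, each time against the same 2026-02-24 05:53:03 expiration. The deadline is a fresh random draw inside the certificate's validity window, and every draw lands before the node clock of 2026-01-20, so rotation is due immediately on every pass; the per-second recomputation suggests each rotation attempt is failing and being retried. A sketch of the jittered-deadline scheme (the 70 to 90 percent fractions mirror client-go's certificate manager as best I recall and are an assumption, not something the log states):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline mimics the jitter used by client-go's certificate
    // manager: a random point in roughly the 70-90% span of the certificate's
    // validity window. The exact fractions are an assumption, not from the log.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // NotAfter matches the log's "Certificate expiration is 2026-02-24
        // 05:53:03 +0000 UTC"; a one-year lifetime is a guess for illustration.
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        notBefore := notAfter.AddDate(-1, 0, 0)
        now := time.Date(2026, 1, 20, 14, 51, 10, 0, time.UTC) // node clock in the log

        deadline := nextRotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline)
        if now.After(deadline) {
            // Every deadline the log shows is already in the past, so the
            // kubelet tries to rotate immediately on each pass.
            fmt.Println("deadline already passed; rotation due now")
        }
    }

Under a one-year lifetime the 70 to 90 percent span runs from roughly 2025-11 to 2026-01, which is consistent with all three deadlines this log reports.
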
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.796727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:10 crc kubenswrapper[4949]: I0120 14:51:10.899326 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:10Z","lastTransitionTime":"2026-01-20T14:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002767 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.002786 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.104933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.104972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.104983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.105000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.105012 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.207728 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310390 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.310426 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413141 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.413221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.516930 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.619553 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.722880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723155 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.723165 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:11Z","lastTransitionTime":"2026-01-20T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
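For context on the repeated condition above: the kubelet reports NodeNotReady because the container runtime keeps answering NetworkReady=false until a network plugin writes a configuration file into the CNI configuration directory. The following is a minimal, hypothetical Go sketch of that kind of directory check; it is illustrative only, not the actual CRI-O/ocicni implementation, and the path is taken from the log above.

// cnicheck.go, a sketch of the check behind "no CNI configuration file
// in /etc/kubernetes/cni/net.d/"; not the real runtime code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains at least one CNI network
// configuration file (.conf, .conflist or .json).
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	if !ok {
		// While this stays false, the runtime reports NetworkReady=false
		// and the kubelet keeps publishing the NodeNotReady condition seen above.
		fmt.Println("no CNI configuration file found; network not ready")
		return
	}
	fmt.Println("CNI configuration present")
}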
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.762391 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:40:18.615848017 +0000 UTC
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.788889 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.788889 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:51:11 crc kubenswrapper[4949]: E0120 14:51:11.789014 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:51:11 crc kubenswrapper[4949]: I0120 14:51:11.789208 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:11 crc kubenswrapper[4949]: E0120 14:51:11.789230 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:51:11 crc kubenswrapper[4949]: E0120 14:51:11.789398 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.593409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:12 crc kubenswrapper[4949]: E0120 14:51:12.593635 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 14:51:12 crc kubenswrapper[4949]: E0120 14:51:12.593754 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:51:44.593726782 +0000 UTC m=+100.403557680 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered
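The durationBeforeRetry of 32s in the mount failure above is consistent with an exponential backoff that doubles after each consecutive failure of the same volume operation. A small illustrative Go sketch follows; the 2s initial delay and 2m cap are assumptions chosen for the example, not kubelet's exact constants.

// backoff.go, an illustrative sketch of exponential backoff of the kind
// behind "No retries permitted until ... (durationBeforeRetry 32s)".
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial time.Duration // delay after the first failure (assumed 2s here)
	max     time.Duration // upper bound on the delay (assumed 2m here)
	current time.Duration
}

// next doubles the delay on each consecutive failure, up to max.
func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else if b.current < b.max {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := &backoff{initial: 2 * time.Second, max: 2 * time.Minute}
	now := time.Now()
	for i := 1; i <= 5; i++ {
		d := b.next()
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			i, now.Add(d).Format(time.RFC3339), d)
	}
	// The fifth consecutive failure yields a 32s delay: 2s -> 4s -> 8s -> 16s -> 32s.
}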
Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.762982 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:57:28.209116993 +0000 UTC
Jan 20 14:51:12 crc kubenswrapper[4949]: I0120 14:51:12.788307 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:12 crc kubenswrapper[4949]: E0120 14:51:12.788459 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.763246 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:43:30.96277594 +0000 UTC
Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.788418 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:51:13 crc kubenswrapper[4949]: E0120 14:51:13.788506 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
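The certificate_manager.go records above log a different rotation deadline on each pass because the deadline is jittered: it is drawn at random from a window late in the certificate's validity period (in upstream client-go this is roughly the 70-90% span of the lifetime; treat the exact fractions here as an assumption restated from memory). A sketch follows; the NotBefore value is inferred from the logged deadlines and is therefore also an assumption.

// rotation.go, a sketch of a jittered certificate rotation deadline, which is
// why the logged "rotation deadline" moves around on every attempt.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the 70-90% span of the
// certificate's validity window (fractions assumed, see note above).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// NotAfter is the expiry from the log; NotBefore is a guessed issue time
	// consistent with the deadlines the log reports.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}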
Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.788565 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.788599 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:13 crc kubenswrapper[4949]: E0120 14:51:13.788816 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:51:13 crc kubenswrapper[4949]: E0120 14:51:13.788863 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:51:13 crc kubenswrapper[4949]: I0120 14:51:13.801965 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.764350 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:19:34.639935683 +0000 UTC
Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.788097 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:14 crc kubenswrapper[4949]: E0120 14:51:14.788248 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.799666 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.808320 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.817901 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822681 4949 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822733 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.822766 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.827835 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.837186 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.848513 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.858248 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.869360 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.878698 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.889613 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 
14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.910945 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.923157 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.924266 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:14Z","lastTransitionTime":"2026-01-20T14:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.938053 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.949624 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.975728 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc90
69edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:14 crc kubenswrapper[4949]: I0120 14:51:14.991599 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:14Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.002897 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.019621 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc 
kubenswrapper[4949]: I0120 14:51:15.027417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.027438 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.031338 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.130736 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232376 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/0.log" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232419 4949 generic.go:334] "Generic (PLEG): container finished" podID="3ac16078-f295-4f4b-875c-a8505e87b9da" containerID="1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc" exitCode=1 Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232444 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerDied","Data":"1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232730 4949 scope.go:117] "RemoveContainer" containerID="1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.232837 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.241767 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.253568 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.269997 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.294073 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.310463 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.325035 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335285 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335351 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.335412 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.345656 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.359473 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.379087 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.389392 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.402252 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.411400 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.420296 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.429632 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439433 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439551 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.439748 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.445451 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.455847 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50a
d8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.470223 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7
20c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.482834 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.493912 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:15Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.542045 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.542310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.543105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.543205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.543291 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645609 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.645799 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747708 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.747737 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.765449 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:26:14.816067549 +0000 UTC Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.787951 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.788007 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:15 crc kubenswrapper[4949]: E0120 14:51:15.788136 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:15 crc kubenswrapper[4949]: E0120 14:51:15.788249 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.788508 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:15 crc kubenswrapper[4949]: E0120 14:51:15.788914 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.850716 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:15 crc kubenswrapper[4949]: I0120 14:51:15.952760 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:15Z","lastTransitionTime":"2026-01-20T14:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.055091 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157463 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157474 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.157498 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.237329 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/0.log" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.237388 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259391 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259437 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.259471 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.260024 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.270947 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.282020 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 
14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.308245 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.328317 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.348998 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50
:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361716 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.361755 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.368568 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.384188 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.399772 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.412209 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.427849 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.447928 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.463497 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465422 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465451 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465460 4949 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.465486 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.475462 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.489057 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 
14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.504820 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.517629 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.530677 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.542308 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:16Z is after 2025-08-24T17:21:41Z"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.568607 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671291 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.671302 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.766387 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:52:52.077377386 +0000 UTC
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773578 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.773925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.774022 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.788823 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:16 crc kubenswrapper[4949]: E0120 14:51:16.788927 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876636 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.876735 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:16 crc kubenswrapper[4949]: I0120 14:51:16.979897 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:16Z","lastTransitionTime":"2026-01-20T14:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.082637 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184761 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184780 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.184821 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287469 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.287592 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.390423 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493143 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493179 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.493211 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.596667 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699088 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.699098 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.767311 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:58:17.0517292 +0000 UTC
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.788739 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:51:17 crc kubenswrapper[4949]: E0120 14:51:17.788906 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.789211 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:51:17 crc kubenswrapper[4949]: E0120 14:51:17.789305 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.789499 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:51:17 crc kubenswrapper[4949]: E0120 14:51:17.789643 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.801903 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908275 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:17 crc kubenswrapper[4949]: I0120 14:51:17.908318 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:17Z","lastTransitionTime":"2026-01-20T14:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.011833 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115613 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.115705 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.218934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219059 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.219111 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321785 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321852 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.321890 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.424702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.424915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.424977 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.425037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.425098 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.527431 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630653 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.630691 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.733498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.733914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.734090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.734286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.734451 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.767889 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 12:27:32.209814616 +0000 UTC
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.788422 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.788666 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.837439 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.929948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.930114 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.949687 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:18Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.955333 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.974197 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:18Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.978984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:18 crc kubenswrapper[4949]: I0120 14:51:18.979008 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:18Z","lastTransitionTime":"2026-01-20T14:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:18 crc kubenswrapper[4949]: E0120 14:51:18.996836 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:18Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001337 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001355 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.001366 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.016343 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:19Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.020585 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.035295 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:19Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.035542 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
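Every failed patch in this stretch dies on the same TLS error: the network-node-identity webhook serving on 127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-20 (presumably the CRC VM was resumed long after its bundled certificates lapsed). The API server's call to the webhook therefore fails, and the kubelet's status patch bounces with an Internal error until the retry count runs out. A minimal sketch of how one could confirm the expiry from the node, assuming Python 3 with the third-party cryptography package; the host and port are taken from the Post URL in the errors above:

```python
#!/usr/bin/env python3
# Sketch: fetch the serving certificate the webhook presents on
# 127.0.0.1:9743 (host/port from the Post URL in the log) and compare
# its notAfter against the node clock. Assumes the third-party
# "cryptography" package; run on the node itself.
import datetime
import ssl

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743

# get_server_certificate() does not verify the chain, so it still
# returns the PEM even though the certificate is expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.datetime.now(datetime.timezone.utc)
not_after = cert.not_valid_after_utc  # cryptography >= 42; not_valid_after on older releases
print("notAfter:  ", not_after)
print("node clock:", now)
print("expired:   ", not_after < now)
```

Equivalently, openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates prints the same notBefore/notAfter pair if the Python package is unavailable.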
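For readers unfamiliar with the payload shape: what the kubelet sends in each attempt is a strategic merge patch against the node's status subresource. The "$setElementOrder/conditions" key pins the ordering of the conditions array, and each array element is merged into the existing object by its "type" key rather than by position. A trimmed sketch of the same structure, with values copied from the entries above; the real patch also carries allocatable, capacity, the full image list and nodeInfo:

```python
#!/usr/bin/env python3
# Sketch of the strategic-merge-patch shape seen in the
# "failed to patch status" entries above. Values are trimmed copies of
# the logged payload, not a full reproduction.
import json

patch = {
    "status": {
        # Pins the order of the conditions list during the merge.
        "$setElementOrder/conditions": [
            {"type": "MemoryPressure"},
            {"type": "DiskPressure"},
            {"type": "PIDPressure"},
            {"type": "Ready"},
        ],
        # Each element merges into the existing status by its "type" key.
        "conditions": [
            {
                "type": "Ready",
                "status": "False",
                "reason": "KubeletNotReady",
                "lastHeartbeatTime": "2026-01-20T14:51:19Z",
                "lastTransitionTime": "2026-01-20T14:51:19Z",
            },
        ],
    },
}

# The kubelet PATCHes this to .../nodes/crc/status with
# Content-Type: application/strategic-merge-patch+json.
print(json.dumps(patch, indent=2))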
event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.037745 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.141468 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.246781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.246910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.247167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.247200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.247214 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.351663 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.456965 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.559993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.560107 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664128 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.664169 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767715 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.767756 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.768493 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:55:13.793438715 +0000 UTC Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.788251 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.788286 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.788891 4949 util.go:30] "No sandbox for pod can be found. 
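The certificate_manager line above is worth a second look: the rotation deadline it prints (2025-11-09) is already months behind the node clock, so the kubelet considers its kubelet-serving certificate due for rotation immediately. client-go's certificate manager schedules that deadline at a jittered point roughly 70-90% of the way through the certificate's validity window. A sketch of the computation, where notAfter comes from the log line and notBefore is an assumption, since the log does not print it:

```python
#!/usr/bin/env python3
# Sketch of where "rotation deadline is ..." comes from: a random point
# at roughly 70-90% of the certificate's validity window (client-go
# style jitter). notAfter is taken from the log line above; notBefore
# is an assumed one-year-earlier issue time.
import datetime
import random

not_before = datetime.datetime(2025, 2, 24, 5, 53, 3,
                               tzinfo=datetime.timezone.utc)  # assumed
not_after = datetime.datetime(2026, 2, 24, 5, 53, 3,
                              tzinfo=datetime.timezone.utc)   # from the log

validity = not_after - not_before
deadline = not_before + validity * (0.7 + 0.2 * random.random())

node_clock = datetime.datetime(2026, 1, 20, 14, 51, 19,
                               tzinfo=datetime.timezone.utc)
print("rotation deadline:", deadline)
print("rotation overdue: ", deadline < node_clock)
```

With those assumed dates the deadline always lands between early November 2025 and mid January 2026, consistent with the 2025-11-09 value in the log, and always before the 2026-01-20 node clock, which is why rotation fires at startup.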
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.788804 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.789264 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:19 crc kubenswrapper[4949]: E0120 14:51:19.789489 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.791925 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:19 crc kubenswrapper[4949]: I0120 14:51:19.870734 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:19Z","lastTransitionTime":"2026-01-20T14:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[the same node-status event sequence repeats at 14:51:19.973 and 14:51:20.075] Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178251 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.178277 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.252295 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.254045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.255443 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.277460 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280185 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.280212 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.289240 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.301251 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.312191 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.321358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 
14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.331640 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.359565 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f3
6871b6100d9a687d18e520fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.369955 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.382361 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.386415 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\
" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.401943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.414786 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.427018 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.438654 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.453163 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.465759 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.477968 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484476 4949 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484491 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.484501 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.486781 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.503818 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 
14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.514786 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:20Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.586803 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690275 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.690298 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.769412 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:26:55.126418512 +0000 UTC Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.788017 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:20 crc kubenswrapper[4949]: E0120 14:51:20.788207 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792617 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.792630 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895638 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.895773 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:20 crc kubenswrapper[4949]: I0120 14:51:20.999697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:20.999714 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:20Z","lastTransitionTime":"2026-01-20T14:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.103703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.207641 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310376 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.310388 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413507 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.413553 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.516959 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517017 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.517047 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620020 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.620109 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.723411 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.770489 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:09:43.274764286 +0000 UTC Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.788907 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.788959 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:21 crc kubenswrapper[4949]: E0120 14:51:21.789071 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:21 crc kubenswrapper[4949]: E0120 14:51:21.789269 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.788930 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:21 crc kubenswrapper[4949]: E0120 14:51:21.789870 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.826781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827095 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.827683 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.930568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.930917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.931081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.931231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:21 crc kubenswrapper[4949]: I0120 14:51:21.931360 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:21Z","lastTransitionTime":"2026-01-20T14:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.034597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.034972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.035118 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.035268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.035402 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.138412 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241898 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.241993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.242009 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.263152 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.264361 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/2.log" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.268828 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" exitCode=1 Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.268873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.268941 4949 scope.go:117] "RemoveContainer" containerID="9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.270783 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 14:51:22 crc kubenswrapper[4949]: E0120 14:51:22.271247 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.292505 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.317615 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.340125 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.349495 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.360082 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.374865 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.388748 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z [verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.401844 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.416352 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.431067 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.447808 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452394 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.452448 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.463974 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-
config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.481253 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.517962 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a
323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.533797 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.553354 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555477 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555513 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.555593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.566377 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.579127 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 
14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.599557 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.622503 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f3
6871b6100d9a687d18e520fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:21Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.574901 6972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.575105 6972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575313 6972 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575352 6972 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 14:51:20.575378 6972 factory.go:656] Stopping watch factory\\\\nI0120 14:51:20.575326 6972 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575390 6972 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:51:20.623041 6972 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 14:51:20.623088 6972 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 14:51:20.623175 6972 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
14:51:20.623209 6972 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 14:51:20.623316 6972 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:51:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:22Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.658904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.659665 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762422 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.762476 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.771881 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:22:26.329043535 +0000 UTC Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.788694 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:22 crc kubenswrapper[4949]: E0120 14:51:22.788885 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865774 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.865856 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:22 crc kubenswrapper[4949]: I0120 14:51:22.968885 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:22Z","lastTransitionTime":"2026-01-20T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071763 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.071793 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.175909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.176043 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.275426 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.278494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.278800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.279202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.279607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.280089 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384142 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.384206 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.486981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.487001 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.590755 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.694144 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.772732 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:10:17.964845169 +0000 UTC Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.788183 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.788313 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:23 crc kubenswrapper[4949]: E0120 14:51:23.788362 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.788183 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:23 crc kubenswrapper[4949]: E0120 14:51:23.788555 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:23 crc kubenswrapper[4949]: E0120 14:51:23.788715 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796802 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.796843 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:23 crc kubenswrapper[4949]: I0120 14:51:23.900329 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:23Z","lastTransitionTime":"2026-01-20T14:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.003978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.004142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.107414 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211219 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.211231 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.314951 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.315052 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417869 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417906 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.417938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521565 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.521613 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624858 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.624943 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727888 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727960 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.727990 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.773657 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:59:42.208373702 +0000 UTC Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.788059 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:24 crc kubenswrapper[4949]: E0120 14:51:24.788291 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.803810 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-gnfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e8f07d-a71c-4c64-96f3-eecb529c1674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdba13459a0601797d99304e5485d126f34f721486adec97e585f42f8e74cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hxdwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-gnfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.821748 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79431eefe74d507ea9a378ae85e5f4e48a6b21b8b50cd2bf36537babeb2c3d35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5b7xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kgqjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.830950 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.842593 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2szcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac16078-f295-4f4b-875c-a8505e87b9da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:14Z\\\",\\\"message\\\":\\\"2026-01-20T14:50:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e\\\\n2026-01-20T14:50:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_391db4c3-e6b5-405a-9f04-c8bd1882766e to /host/opt/cni/bin/\\\\n2026-01-20T14:50:29Z [verbose] multus-daemon started\\\\n2026-01-20T14:50:29Z 
[verbose] Readiness Indicator file check\\\\n2026-01-20T14:51:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:51:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9h4l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2szcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.859306 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hlfls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7s46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hlfls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.878852 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94dbb245-cfd5-45cb-bae8-1701500da9e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98ed83ac8e82fc75cf686994e3c96bfb34e63ef42e097f54f2579a20dda339c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f153b48a0871e44db2b1d9f1eaeede33f7443a5ffb52318715a33dd17f9a3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccd805d2045bff65ba7d4c43c4d605241efc48c2433b78dd19a80d42ed88c51f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.898114 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.921000 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac73cb0e27748f3ddbfbe656870691d340eef144fa03e174f3f7e55b0fcb7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935642 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.935812 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:24Z","lastTransitionTime":"2026-01-20T14:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.939637 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11dd7a2902cf5943030e03115307571b55caf18a084b2b99b1f4701413f366a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5f463d196cca7f73ac95dfcaca3b70febab8f93d52d556c8402ed1c54edb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-
config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.957446 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9957b569-5b87-4d8d-bec2-4a5d4a8b891c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77626d3aae82dbe9d7586e6e2de1f7f0121f069aee294380fdc1bd550aac9780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efeb60c0fba2143cda6592d722ba2cb88afec5c7f253dd5104db692eeabac9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcs88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ghqnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.977495 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b797c60-cbb9-423c-a3be-4bbc519ec6a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ad344cf864afcbd4d4ae2b86f4b37b8bdb9c474a315806fa9e9acf8ea5ad6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ea022b63c3980942acf5d66cc9b094f9861ece5f0500776d02c8fef763f245e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://decd235c8c10ac046b49b484859faf6ad0bd10f0e9111b0f7917c6b2c09e3f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac4834d8d1e841a7ccd5693556063aab85fa3a323b4e2195d0cd813cd19013b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd52c460f32388d2a8f7542fe8bd3bd3e970dcba5c188f8aa2b31010354bfd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b56fbe119db0480f86c24501a0ca1e2fa6e1069cc7d2bed6ad3ac3c3eeeb92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f74ae7dd17e1e8a451a2991a9cd968c2799bbe60a304631dc98a242527ac530\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2d37a8611c519f4a6971540b81172c43064739929ac4c148f6d85658a0b6a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:24 crc kubenswrapper[4949]: I0120 14:51:24.999766 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:24Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.037204 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da08b8e6-19e1-41fa-8e71-2988f3effb27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc9dcbf3b1954b9e2ec8e794b5718d2bb2db69fcdaf4a7e8690816213010dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2108dda6d7a4b269e6ce9201eb2c7928348093d8b3038f71a9dfc60bc087d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49df403b83fddb84c01beb601d032c38d1e88b180f521759d69db5406091cf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba8617bfd9fa969375987d889283b7e00b7fd2d54ddb34e49b8afe2f7085ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a91cc718137c24c5e6166ca20b5f8e69fcf670b64cda3bae943eafcfead912b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c14967fcb76ccf4d14f80ce5626029caa9785da061eec8524cd1d0a1772708\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440bfe564f58939de7ffe4f15e2dcb26148cfa1164fde4636a0dc738f8efe86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xc62j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sqr5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc 
kubenswrapper[4949]: I0120 14:51:25.039924 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.039959 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.048454 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hzkk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e6134b2-cfc6-4cd0-bf7d-2d5f4ff543cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38ab25d98dd9717cae243aa2ab2b45e00d72d13904c65fc8cbfeda6280b418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-694ct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hzkk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 
20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.058632 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f807e13-b3f0-42cf-ba92-e11ccff28eb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac3ab6fd3560bd11b5d0199f943366396588ec02ceb91fd58979ccff526eafa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667d7c622080aa4a4eb32cb92e0b76b4d479b79cde345b148b51c7f023c79c74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7be865e9e82ff876fe82a7f5f90f8bd547d006934ce8a2aa945834947dd60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4fc6a75db329672e65b9758e8d797b0cab6f3ef5885cab49ce252ad198e7dd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.075684 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f3
6871b6100d9a687d18e520fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e18b54fedb320b78a665e41db7281956d95cc9069edda558df31893e20caa04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:50:50Z\\\",\\\"message\\\":\\\"tified-operators faf75094-01cc-4ebf-8761-1361fa2bf31e 0xc0071f21ed 0xc0071f21ee}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0120 14:50:50.701215 6588 services_controller.go:356] Processing sync for service openshift-marketplace/redhat-marketplace for network=default\\\\nF0120 14:50:50.701220 6588 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T14:51:21Z\\\",\\\"message\\\":\\\"s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.574901 6972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 14:51:20.575105 6972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575313 6972 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575352 6972 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 14:51:20.575378 6972 factory.go:656] Stopping watch factory\\\\nI0120 14:51:20.575326 6972 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 14:51:20.575390 6972 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 14:51:20.623041 6972 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0120 14:51:20.623088 6972 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0120 14:51:20.623175 6972 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
14:51:20.623209 6972 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0120 14:51:20.623316 6972 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T14:51:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9cmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-z6zd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.088093 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa3bff8-f31e-47b8-a142-3a1c711a9878\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a6e3d5af8c18fc85b76a7efac1d8e453227244d12a73b8ca70a5d1bb39ffc22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41315aefe169a9a7bdb343b994cdc0c33c40618e21d50ad8066464f94bc7d209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.100325 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1290ba-8b84-450b-8b26-3b8e962aef5e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T14:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T14:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T14:50:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.113430 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.126575 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T14:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0955aa99a69fca90b99041167a009392de19d74f7877ce0f5ed19053eea4146f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:25Z is after 2025-08-24T17:21:41Z" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.142284 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.243874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.244171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347206 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.347239 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.450355 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553132 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553145 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.553171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654792 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.654869 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.757436 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.774870 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:05:32.295996794 +0000 UTC Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.788473 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.788606 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.788488 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:25 crc kubenswrapper[4949]: E0120 14:51:25.788687 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:25 crc kubenswrapper[4949]: E0120 14:51:25.788768 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:25 crc kubenswrapper[4949]: E0120 14:51:25.788931 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.860174 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:25 crc kubenswrapper[4949]: I0120 14:51:25.963116 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:25Z","lastTransitionTime":"2026-01-20T14:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.065741 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168596 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168636 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.168681 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270896 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270955 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.270984 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374022 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374071 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.374125 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482356 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.482387 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585176 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585240 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.585274 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687122 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.687163 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.776091 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:24:14.200377102 +0000 UTC Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.788015 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:26 crc kubenswrapper[4949]: E0120 14:51:26.788129 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
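
[annotation] The kubelet-serving certificate lines in this window are worth pausing on: the expiration stays fixed at 2026-02-24 05:53:03 UTC, but the logged "rotation deadline" changes on every pass (2025-12-27, then 2025-12-22, then 2025-12-04, ...) and is already in the past relative to the Jan 20 clock, so the kubelet keeps attempting rotation. This matches the behavior of client-go's certificate manager, which re-draws the deadline as a random point in roughly 70-90% of the certificate's validity window each time it checks. A minimal sketch of that jitter rule follows; the NotBefore value is an assumption (only NotAfter appears in the log):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline mirrors the jitter rule used by client-go's
// certificate manager: pick a random point in [70%, 90%] of the
// certificate's validity window. Re-drawing it on every pass is why
// each "rotation deadline is ..." line above shows a different value.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	jittered := time.Duration(total * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed 1-year lifetime
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}
```

A deadline in the past means rotation is due immediately; the fact that the line keeps reappearing with fresh deadlines indicates the rotation attempts are not completing.
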
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.789148 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891473 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891557 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891572 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.891581 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:26 crc kubenswrapper[4949]: I0120 14:51:26.994278 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:26Z","lastTransitionTime":"2026-01-20T14:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097052 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.097124 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.199607 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.302908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.302985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.303009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.303039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.303060 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.405860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.406418 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.510236 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.612937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.612987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.612997 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.613009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.613018 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716643 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716673 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.716697 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.776596 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:26:55.79771905 +0000 UTC Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.787909 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:27 crc kubenswrapper[4949]: E0120 14:51:27.788086 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.788320 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.788485 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:27 crc kubenswrapper[4949]: E0120 14:51:27.788875 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:27 crc kubenswrapper[4949]: E0120 14:51:27.789037 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.819997 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820095 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.820114 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:27 crc kubenswrapper[4949]: I0120 14:51:27.929893 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:27Z","lastTransitionTime":"2026-01-20T14:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032154 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.032193 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.135847 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238631 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.238686 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341778 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.341928 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.366679 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.366843 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.366816982 +0000 UTC m=+148.176647870 (durationBeforeRetry 1m4s). 
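
[annotation] The UnmountVolume failure just above ends with "durationBeforeRetry 1m4s", and the TearDown detail in the entry that follows names the cause: driver kubevirt.io.hostpath-provisioner is not in the kubelet's list of registered CSI drivers, typically because the driver's node plugin has not yet re-registered after the kubelet restart. The 1m4s is consistent with a per-operation exponential backoff that doubles from an initial 500ms; a small sketch of that arithmetic, with the initial value and cap as assumptions inferred from the doubling pattern rather than facts taken from this log:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry sketches exponential backoff for a failing
// volume operation: double the wait after each consecutive failure,
// starting from 500ms and capping (both values assumed).
func durationBeforeRetry(consecutiveFailures int) time.Duration {
	d := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 9; f++ {
		fmt.Printf("failure %2d -> retry in %v\n", f, durationBeforeRetry(f))
	}
	// failure 8 -> retry in 1m4s, matching the entry above
}
```
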
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445210 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.445341 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468163 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.468195 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468332 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468347 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468365 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468435 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468372 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468471 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468444655 +0000 UTC m=+148.278275513 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468603 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468639 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468610061 +0000 UTC m=+148.278440959 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468681 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468663531 +0000 UTC m=+148.278494399 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468371 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468707 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.468734 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 14:52:32.468726133 +0000 UTC m=+148.278557001 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548020 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548132 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.548151 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.650979 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.753948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754018 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.754053 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.776811 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:21:58.113111151 +0000 UTC Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.788329 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:28 crc kubenswrapper[4949]: E0120 14:51:28.788514 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
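
[annotation] The block that dominates this log — four "Recording event message" lines followed by one "Node became not ready" line — repeats roughly every 100ms because the node status sync keeps re-running while the Ready condition stays False. The condition printed by setters.go has the standard shape; the standalone struct below reproduces the JSON seen above (field names copied from the log lines, not the actual k8s.io/api type):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors the JSON object emitted by setters.go above.
type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	ts := time.Date(2026, 1, 20, 14, 51, 28, 0, time.UTC)
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false ...",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b))
}
```

As long as no CNI configuration appears in /etc/kubernetes/cni/net.d/, every sync produces this same condition and the node never leaves NotReady.
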
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857053 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857129 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.857171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:28 crc kubenswrapper[4949]: I0120 14:51:28.961223 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:28Z","lastTransitionTime":"2026-01-20T14:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064370 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.064577 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.167948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.168068 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.271599 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.330490 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.346505 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 
2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.351374 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.368620 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 
2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.372998 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.389925 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 
2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.394437 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.413323 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 
2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.417589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.417809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.417947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.418116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.418273 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.437469 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T14:51:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"18da5c89-38cf-46f2-855c-9ee31684d8b7\\\",\\\"systemUUID\\\":\\\"3efd1f11-fa35-4658-a27c-ab73770bda97\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T14:51:29Z is after 
2025-08-24T17:21:41Z" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.437674 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441473 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441492 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.441506 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544869 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544922 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.544941 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648895 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.648996 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752370 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.752430 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.776974 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:54:22.86653812 +0000 UTC Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.788750 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.788940 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.789272 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.789354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.789724 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:29 crc kubenswrapper[4949]: E0120 14:51:29.789754 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.855219 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958388 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:29 crc kubenswrapper[4949]: I0120 14:51:29.958406 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:29Z","lastTransitionTime":"2026-01-20T14:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060643 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.060703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163674 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163694 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.163708 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.266960 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.369942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.370062 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474007 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474071 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.474138 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576964 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.576987 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680883 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.680957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.777155 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:53:46.382270376 +0000 UTC Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783357 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.783433 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.789070 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:30 crc kubenswrapper[4949]: E0120 14:51:30.789339 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.885979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886038 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.886093 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:30 crc kubenswrapper[4949]: I0120 14:51:30.988475 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:30Z","lastTransitionTime":"2026-01-20T14:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091041 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091103 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.091113 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.194409 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297078 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.297138 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.400927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.401110 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.504456 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.608980 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.712107 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.777894 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:47:08.768420384 +0000 UTC Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.788816 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.788861 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:31 crc kubenswrapper[4949]: E0120 14:51:31.789038 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:31 crc kubenswrapper[4949]: E0120 14:51:31.789185 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.789748 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:31 crc kubenswrapper[4949]: E0120 14:51:31.789941 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.815198 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918586 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918792 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:31 crc kubenswrapper[4949]: I0120 14:51:31.918996 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:31Z","lastTransitionTime":"2026-01-20T14:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.022399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.022710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.022927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.023127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.023357 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127170 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.127227 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.229874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.229957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.229984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.230010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.230028 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333036 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333140 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.333157 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.435690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436693 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.436850 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540234 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.540282 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.642743 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746503 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746565 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.746620 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.778120 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:32:22.260740702 +0000 UTC Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.789355 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:32 crc kubenswrapper[4949]: E0120 14:51:32.789628 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.850250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851557 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.851760 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954162 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:32 crc kubenswrapper[4949]: I0120 14:51:32.954207 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:32Z","lastTransitionTime":"2026-01-20T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057829 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.057851 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161119 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.161250 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.264505 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366889 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366966 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.366978 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470232 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.470380 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.573957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574013 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.574039 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678127 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.678143 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.779832 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:04:39.853483531 +0000 UTC Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.781793 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.789676 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.789790 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:33 crc kubenswrapper[4949]: E0120 14:51:33.789945 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.790223 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:33 crc kubenswrapper[4949]: E0120 14:51:33.790316 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:33 crc kubenswrapper[4949]: E0120 14:51:33.790631 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.885256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886041 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.886709 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:33 crc kubenswrapper[4949]: I0120 14:51:33.989981 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:33Z","lastTransitionTime":"2026-01-20T14:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092586 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.092681 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.195126 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.297807 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.400335 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503057 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503121 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.503196 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.605960 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709505 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.709596 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.781343 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:05:47.089556143 +0000 UTC Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.788777 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:34 crc kubenswrapper[4949]: E0120 14:51:34.789019 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.812313 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.837127 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.837111872 podStartE2EDuration="21.837111872s" podCreationTimestamp="2026-01-20 14:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.836824562 +0000 UTC m=+90.646655420" watchObservedRunningTime="2026-01-20 14:51:34.837111872 +0000 UTC m=+90.646942730" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.861201 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.86116083 podStartE2EDuration="1m11.86116083s" podCreationTimestamp="2026-01-20 14:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.860574451 +0000 UTC m=+90.670405319" watchObservedRunningTime="2026-01-20 14:51:34.86116083 +0000 UTC m=+90.670991698" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.914989 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.915002 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:34Z","lastTransitionTime":"2026-01-20T14:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.915750 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-gnfmv" podStartSLOduration=69.91572757 podStartE2EDuration="1m9.91572757s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.91421595 +0000 UTC m=+90.724046848" watchObservedRunningTime="2026-01-20 14:51:34.91572757 +0000 UTC m=+90.725558448" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.930170 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podStartSLOduration=68.930146922 podStartE2EDuration="1m8.930146922s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.929313966 +0000 UTC m=+90.739144824" watchObservedRunningTime="2026-01-20 14:51:34.930146922 +0000 UTC m=+90.739977790" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.949206 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2szcd" podStartSLOduration=68.949187998 podStartE2EDuration="1m8.949187998s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.946239721 +0000 UTC m=+90.756070589" watchObservedRunningTime="2026-01-20 14:51:34.949187998 +0000 UTC m=+90.759018856" Jan 20 14:51:34 crc kubenswrapper[4949]: I0120 14:51:34.973426 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.973405781 podStartE2EDuration="1m6.973405781s" podCreationTimestamp="2026-01-20 14:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:34.972863283 +0000 UTC m=+90.782694141" watchObservedRunningTime="2026-01-20 14:51:34.973405781 +0000 UTC m=+90.783236649" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017331 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.017416 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.024386 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hzkk7" podStartSLOduration=70.024364013 podStartE2EDuration="1m10.024364013s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.013781786 +0000 UTC m=+90.823612644" watchObservedRunningTime="2026-01-20 14:51:35.024364013 +0000 UTC m=+90.834194881" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.024982 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ghqnb" podStartSLOduration=69.024973813 podStartE2EDuration="1m9.024973813s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.024323302 +0000 UTC m=+90.834154170" watchObservedRunningTime="2026-01-20 14:51:35.024973813 +0000 UTC m=+90.834804671" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.062562 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.062542205 podStartE2EDuration="1m11.062542205s" podCreationTimestamp="2026-01-20 14:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.049549099 +0000 UTC m=+90.859379957" watchObservedRunningTime="2026-01-20 14:51:35.062542205 +0000 UTC m=+90.872373063" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.079889 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-sqr5x" podStartSLOduration=69.079873143 podStartE2EDuration="1m9.079873143s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.079459029 +0000 UTC m=+90.889289897" watchObservedRunningTime="2026-01-20 14:51:35.079873143 +0000 UTC m=+90.889704001" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.091473 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.091452353 podStartE2EDuration="43.091452353s" podCreationTimestamp="2026-01-20 14:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:35.091409872 +0000 UTC m=+90.901240730" watchObservedRunningTime="2026-01-20 14:51:35.091452353 +0000 UTC m=+90.901283221" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119726 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119749 4949 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.119765 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222792 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.222808 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324582 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.324590 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.427222 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531581 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.531603 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634897 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.634942 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.737989 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.738013 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.782475 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:59:16.139141243 +0000 UTC Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.788852 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.788952 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.788866 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.789032 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.789197 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.789321 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.790395 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 14:51:35 crc kubenswrapper[4949]: E0120 14:51:35.790757 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.844806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.845205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.845462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.845776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.846069 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:35 crc kubenswrapper[4949]: I0120 14:51:35.949469 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:35Z","lastTransitionTime":"2026-01-20T14:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.051845 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.154593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.257805 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.365105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.365984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.366008 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.366029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.366042 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.469687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.470498 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.573363 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.676470 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.676908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.677066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.677183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.677325 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782759 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.782777 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.783480 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:31:22.359572997 +0000 UTC Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.789121 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:36 crc kubenswrapper[4949]: E0120 14:51:36.789406 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886148 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.886255 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989232 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:36 crc kubenswrapper[4949]: I0120 14:51:36.989316 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:36Z","lastTransitionTime":"2026-01-20T14:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091638 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091653 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.091684 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194875 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.194887 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.298509 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401124 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.401205 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503955 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503974 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.503988 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606713 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.606761 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.709416 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.784634 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:12:34.24476391 +0000 UTC Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.787940 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.787994 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.788012 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:37 crc kubenswrapper[4949]: E0120 14:51:37.788092 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:37 crc kubenswrapper[4949]: E0120 14:51:37.788242 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:37 crc kubenswrapper[4949]: E0120 14:51:37.788309 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812013 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812094 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.812132 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915429 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:37 crc kubenswrapper[4949]: I0120 14:51:37.915443 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:37Z","lastTransitionTime":"2026-01-20T14:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.017901 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.120234 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248811 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.248910 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.352347 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.455323 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557312 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557434 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.557457 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.660988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.661014 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763836 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.763884 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.785623 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:57:57.520568972 +0000 UTC Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.787912 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:38 crc kubenswrapper[4949]: E0120 14:51:38.788062 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.867122 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969473 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:38 crc kubenswrapper[4949]: I0120 14:51:38.969512 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:38Z","lastTransitionTime":"2026-01-20T14:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.072552 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.175888 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.279740 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.389726 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.492940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.492978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.492988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.493007 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.493024 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529247 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.529274 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T14:51:39Z","lastTransitionTime":"2026-01-20T14:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.575498 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2"] Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.575856 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.577865 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.578088 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.578253 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.578418 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593186 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5131c40-5bac-4c6a-b498-95560669483a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5131c40-5bac-4c6a-b498-95560669483a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593285 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5131c40-5bac-4c6a-b498-95560669483a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593315 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.593378 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694662 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694748 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5131c40-5bac-4c6a-b498-95560669483a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694789 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5131c40-5bac-4c6a-b498-95560669483a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694801 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694882 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5131c40-5bac-4c6a-b498-95560669483a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.694945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.695089 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f5131c40-5bac-4c6a-b498-95560669483a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.696932 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5131c40-5bac-4c6a-b498-95560669483a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.708269 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f5131c40-5bac-4c6a-b498-95560669483a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.711718 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5131c40-5bac-4c6a-b498-95560669483a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hkkt2\" (UID: \"f5131c40-5bac-4c6a-b498-95560669483a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.786784 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:20:33.24828272 +0000 UTC Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.786860 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.787985 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:39 crc kubenswrapper[4949]: E0120 14:51:39.788083 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.788250 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:39 crc kubenswrapper[4949]: E0120 14:51:39.788302 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.788407 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:39 crc kubenswrapper[4949]: E0120 14:51:39.788451 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.794501 4949 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 14:51:39 crc kubenswrapper[4949]: I0120 14:51:39.890112 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.337684 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" event={"ID":"f5131c40-5bac-4c6a-b498-95560669483a","Type":"ContainerStarted","Data":"0524b07f4739d1105af8cc7d9c2806b90304508d9b18cc3d00b496381d0f09af"} Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.337754 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" event={"ID":"f5131c40-5bac-4c6a-b498-95560669483a","Type":"ContainerStarted","Data":"628a7ae05a3a0834d9c35c526003957c1142c4942f8ad33c2a1d3d39e887ec68"} Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.359752 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hkkt2" podStartSLOduration=75.35972937 podStartE2EDuration="1m15.35972937s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:51:40.358890753 +0000 UTC m=+96.168721611" watchObservedRunningTime="2026-01-20 14:51:40.35972937 +0000 UTC m=+96.169560268" Jan 20 14:51:40 crc kubenswrapper[4949]: I0120 14:51:40.788991 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:40 crc kubenswrapper[4949]: E0120 14:51:40.789483 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:41 crc kubenswrapper[4949]: I0120 14:51:41.788576 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:41 crc kubenswrapper[4949]: I0120 14:51:41.788675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:41 crc kubenswrapper[4949]: I0120 14:51:41.788675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:41 crc kubenswrapper[4949]: E0120 14:51:41.788849 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:41 crc kubenswrapper[4949]: E0120 14:51:41.789114 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:41 crc kubenswrapper[4949]: E0120 14:51:41.789299 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:42 crc kubenswrapper[4949]: I0120 14:51:42.788740 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:42 crc kubenswrapper[4949]: E0120 14:51:42.788898 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:43 crc kubenswrapper[4949]: I0120 14:51:43.788872 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:43 crc kubenswrapper[4949]: I0120 14:51:43.788988 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:43 crc kubenswrapper[4949]: E0120 14:51:43.789037 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:43 crc kubenswrapper[4949]: E0120 14:51:43.789171 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:43 crc kubenswrapper[4949]: I0120 14:51:43.789253 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:43 crc kubenswrapper[4949]: E0120 14:51:43.789338 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:44 crc kubenswrapper[4949]: I0120 14:51:44.644837 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:44 crc kubenswrapper[4949]: E0120 14:51:44.645155 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:51:44 crc kubenswrapper[4949]: E0120 14:51:44.645279 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs podName:fa4eae9d-b492-4fd3-8baf-38ed726d9e4c nodeName:}" failed. No retries permitted until 2026-01-20 14:52:48.645246797 +0000 UTC m=+164.455077715 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs") pod "network-metrics-daemon-hlfls" (UID: "fa4eae9d-b492-4fd3-8baf-38ed726d9e4c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 14:51:44 crc kubenswrapper[4949]: I0120 14:51:44.788287 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:44 crc kubenswrapper[4949]: E0120 14:51:44.790732 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:45 crc kubenswrapper[4949]: I0120 14:51:45.788359 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:45 crc kubenswrapper[4949]: I0120 14:51:45.788438 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:45 crc kubenswrapper[4949]: E0120 14:51:45.788484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:45 crc kubenswrapper[4949]: I0120 14:51:45.788502 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:45 crc kubenswrapper[4949]: E0120 14:51:45.788592 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:45 crc kubenswrapper[4949]: E0120 14:51:45.788690 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:46 crc kubenswrapper[4949]: I0120 14:51:46.788314 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:46 crc kubenswrapper[4949]: E0120 14:51:46.788570 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:47 crc kubenswrapper[4949]: I0120 14:51:47.788675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:47 crc kubenswrapper[4949]: I0120 14:51:47.788803 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:47 crc kubenswrapper[4949]: I0120 14:51:47.789656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:47 crc kubenswrapper[4949]: E0120 14:51:47.789868 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:47 crc kubenswrapper[4949]: E0120 14:51:47.790163 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:47 crc kubenswrapper[4949]: E0120 14:51:47.790226 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:48 crc kubenswrapper[4949]: I0120 14:51:48.788829 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:48 crc kubenswrapper[4949]: E0120 14:51:48.789043 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:48 crc kubenswrapper[4949]: I0120 14:51:48.790472 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 14:51:48 crc kubenswrapper[4949]: E0120 14:51:48.790812 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-z6zd5_openshift-ovn-kubernetes(775d7cfb-d5e3-457d-a7fa-4f0bdb752d04)\"" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" Jan 20 14:51:49 crc kubenswrapper[4949]: I0120 14:51:49.788810 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:49 crc kubenswrapper[4949]: I0120 14:51:49.788885 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:49 crc kubenswrapper[4949]: I0120 14:51:49.788834 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:49 crc kubenswrapper[4949]: E0120 14:51:49.789994 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:49 crc kubenswrapper[4949]: E0120 14:51:49.790388 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:49 crc kubenswrapper[4949]: E0120 14:51:49.790717 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:50 crc kubenswrapper[4949]: I0120 14:51:50.789061 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:50 crc kubenswrapper[4949]: E0120 14:51:50.789467 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:51 crc kubenswrapper[4949]: I0120 14:51:51.788185 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:51 crc kubenswrapper[4949]: I0120 14:51:51.788233 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:51 crc kubenswrapper[4949]: I0120 14:51:51.788225 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:51 crc kubenswrapper[4949]: E0120 14:51:51.788703 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:51 crc kubenswrapper[4949]: E0120 14:51:51.789092 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:51 crc kubenswrapper[4949]: E0120 14:51:51.789212 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:52 crc kubenswrapper[4949]: I0120 14:51:52.788992 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:52 crc kubenswrapper[4949]: E0120 14:51:52.789233 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:53 crc kubenswrapper[4949]: I0120 14:51:53.788188 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:53 crc kubenswrapper[4949]: I0120 14:51:53.788219 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:53 crc kubenswrapper[4949]: E0120 14:51:53.788389 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:53 crc kubenswrapper[4949]: E0120 14:51:53.788557 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:53 crc kubenswrapper[4949]: I0120 14:51:53.788218 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:53 crc kubenswrapper[4949]: E0120 14:51:53.789089 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:54 crc kubenswrapper[4949]: I0120 14:51:54.787989 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:54 crc kubenswrapper[4949]: E0120 14:51:54.790157 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:55 crc kubenswrapper[4949]: I0120 14:51:55.787853 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:55 crc kubenswrapper[4949]: I0120 14:51:55.787925 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:55 crc kubenswrapper[4949]: I0120 14:51:55.788059 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:55 crc kubenswrapper[4949]: E0120 14:51:55.788202 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:55 crc kubenswrapper[4949]: E0120 14:51:55.789261 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:55 crc kubenswrapper[4949]: E0120 14:51:55.789380 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:56 crc kubenswrapper[4949]: I0120 14:51:56.788627 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:56 crc kubenswrapper[4949]: E0120 14:51:56.788862 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:57 crc kubenswrapper[4949]: I0120 14:51:57.788665 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:57 crc kubenswrapper[4949]: I0120 14:51:57.788704 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:57 crc kubenswrapper[4949]: E0120 14:51:57.788822 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:57 crc kubenswrapper[4949]: I0120 14:51:57.788867 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:57 crc kubenswrapper[4949]: E0120 14:51:57.789045 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:51:57 crc kubenswrapper[4949]: E0120 14:51:57.789228 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:58 crc kubenswrapper[4949]: I0120 14:51:58.788800 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:51:58 crc kubenswrapper[4949]: E0120 14:51:58.788976 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:51:59 crc kubenswrapper[4949]: I0120 14:51:59.788612 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:51:59 crc kubenswrapper[4949]: I0120 14:51:59.788693 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:51:59 crc kubenswrapper[4949]: E0120 14:51:59.788975 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:51:59 crc kubenswrapper[4949]: E0120 14:51:59.789076 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:51:59 crc kubenswrapper[4949]: I0120 14:51:59.788715 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:51:59 crc kubenswrapper[4949]: E0120 14:51:59.789851 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:00 crc kubenswrapper[4949]: I0120 14:52:00.789054 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:00 crc kubenswrapper[4949]: E0120 14:52:00.789243 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.418071 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.418843 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/0.log" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.419048 4949 generic.go:334] "Generic (PLEG): container finished" podID="3ac16078-f295-4f4b-875c-a8505e87b9da" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1" exitCode=1 Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.419150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerDied","Data":"2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1"} Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.419621 4949 scope.go:117] "RemoveContainer" containerID="1a7224bd401206c64ad9b1d86819bda2f8bdbb13907a3f28c38b516ebe4b15cc" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.420579 4949 scope.go:117] "RemoveContainer" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1" Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.421007 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2szcd_openshift-multus(3ac16078-f295-4f4b-875c-a8505e87b9da)\"" pod="openshift-multus/multus-2szcd" podUID="3ac16078-f295-4f4b-875c-a8505e87b9da" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.788913 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.789011 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:01 crc kubenswrapper[4949]: I0120 14:52:01.789035 4949 util.go:30] "No sandbox for pod can be found. 
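The stretch above is one steady two-second cycle: the kubelet sync loop retries the same four pods (networking-console-plugin, network-check-source, network-check-target, network-metrics-daemon), cannot create a sandbox because /etc/kubernetes/cni/net.d/ still has no CNI configuration, and skips each sync. The 14:52:01 entries show why: the kube-multus container, which is responsible for writing that configuration, exited with code 1 and is now in a 10s CrashLoopBackOff. A quick way to survey a dump like this is to tally the "Error syncing pod, skipping" entries per pod. The Go sketch below is illustrative only, not cluster tooling; the kubelet.log path is an assumption, and the regular expression is keyed to the field layout visible in the entries above.

// tally_sync_errors.go - a minimal sketch, assuming the log was saved with
// `journalctl -u kubelet > kubelet.log`. It counts "Error syncing pod,
// skipping" entries per pod and prints first/last timestamps, so the window
// in which each pod was stuck is visible at a glance.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("kubelet.log") // assumed export path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Matches the journal timestamp and the pod="ns/name" field seen above.
	re := regexp.MustCompile(`^(\w+ \d+ \d+:\d+:\d+) .*"Error syncing pod, skipping".* pod="([^"]+)"`)

	type window struct {
		first, last string
		n           int
	}
	stuck := map[string]*window{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			w, ok := stuck[m[2]]
			if !ok {
				w = &window{first: m[1]}
				stuck[m[2]] = w
			}
			w.last = m[1]
			w.n++
		}
	}
	for pod, w := range stuck {
		fmt.Printf("%-70s %4d errors  %s .. %s\n", pod, w.n, w.first, w.last)
	}
}

Run against this section, it would report the four CNI-blocked pods with errors spanning 14:51:53 onward, plus the multus-2szcd CrashLoopBackOff entry itself.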
Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.789176 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.789280 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 14:52:01 crc kubenswrapper[4949]: E0120 14:52:01.789430 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 14:52:02 crc kubenswrapper[4949]: I0120 14:52:02.425273 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log"
Jan 20 14:52:02 crc kubenswrapper[4949]: I0120 14:52:02.788135 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:02 crc kubenswrapper[4949]: E0120 14:52:02.788273 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:02 crc kubenswrapper[4949]: I0120 14:52:02.789553 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.430817 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log"
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.433336 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerStarted","Data":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"}
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.433776 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5"
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.639176 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podStartSLOduration=97.639143568 podStartE2EDuration="1m37.639143568s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:03.466109632 +0000 UTC m=+119.275940510" watchObservedRunningTime="2026-01-20 14:52:03.639143568 +0000 UTC m=+119.448974476"
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.639702 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hlfls"]
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.639876 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.640108 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c"
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.788856 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.788988 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:03 crc kubenswrapper[4949]: I0120 14:52:03.788856 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.789072 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.789230 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:03 crc kubenswrapper[4949]: E0120 14:52:03.789441 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:04 crc kubenswrapper[4949]: E0120 14:52:04.774599 4949 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 20 14:52:04 crc kubenswrapper[4949]: I0120 14:52:04.788851 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:04 crc kubenswrapper[4949]: E0120 14:52:04.790866 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:04 crc kubenswrapper[4949]: E0120 14:52:04.880295 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 14:52:05 crc kubenswrapper[4949]: I0120 14:52:05.788955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:05 crc kubenswrapper[4949]: I0120 14:52:05.789013 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:05 crc kubenswrapper[4949]: E0120 14:52:05.789709 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:05 crc kubenswrapper[4949]: I0120 14:52:05.789013 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:05 crc kubenswrapper[4949]: E0120 14:52:05.789858 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:05 crc kubenswrapper[4949]: E0120 14:52:05.789935 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:06 crc kubenswrapper[4949]: I0120 14:52:06.789085 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:06 crc kubenswrapper[4949]: E0120 14:52:06.789271 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:07 crc kubenswrapper[4949]: I0120 14:52:07.788410 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:07 crc kubenswrapper[4949]: E0120 14:52:07.788566 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:07 crc kubenswrapper[4949]: I0120 14:52:07.788663 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:07 crc kubenswrapper[4949]: I0120 14:52:07.788685 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:07 crc kubenswrapper[4949]: E0120 14:52:07.788848 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:07 crc kubenswrapper[4949]: E0120 14:52:07.788997 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:08 crc kubenswrapper[4949]: I0120 14:52:08.788335 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:08 crc kubenswrapper[4949]: E0120 14:52:08.788504 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:09 crc kubenswrapper[4949]: I0120 14:52:09.788241 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:09 crc kubenswrapper[4949]: I0120 14:52:09.788379 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.788612 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:09 crc kubenswrapper[4949]: I0120 14:52:09.788682 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.788751 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.788382 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:09 crc kubenswrapper[4949]: E0120 14:52:09.881400 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 14:52:10 crc kubenswrapper[4949]: I0120 14:52:10.788283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:10 crc kubenswrapper[4949]: E0120 14:52:10.788404 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:11 crc kubenswrapper[4949]: I0120 14:52:11.788388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:11 crc kubenswrapper[4949]: I0120 14:52:11.788388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:11 crc kubenswrapper[4949]: E0120 14:52:11.788603 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:11 crc kubenswrapper[4949]: I0120 14:52:11.788421 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:11 crc kubenswrapper[4949]: E0120 14:52:11.788768 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:11 crc kubenswrapper[4949]: E0120 14:52:11.788832 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:12 crc kubenswrapper[4949]: I0120 14:52:12.789119 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:12 crc kubenswrapper[4949]: E0120 14:52:12.789331 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:13 crc kubenswrapper[4949]: I0120 14:52:13.788510 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:13 crc kubenswrapper[4949]: I0120 14:52:13.788608 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:13 crc kubenswrapper[4949]: I0120 14:52:13.788653 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:13 crc kubenswrapper[4949]: E0120 14:52:13.788777 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:13 crc kubenswrapper[4949]: E0120 14:52:13.788872 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:13 crc kubenswrapper[4949]: E0120 14:52:13.789000 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:14 crc kubenswrapper[4949]: I0120 14:52:14.790882 4949 scope.go:117] "RemoveContainer" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1" Jan 20 14:52:14 crc kubenswrapper[4949]: I0120 14:52:14.794117 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:14 crc kubenswrapper[4949]: E0120 14:52:14.794489 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:14 crc kubenswrapper[4949]: E0120 14:52:14.882632 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.500448 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log" Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.500558 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470"} Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.788457 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.788472 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:15 crc kubenswrapper[4949]: E0120 14:52:15.788687 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:15 crc kubenswrapper[4949]: I0120 14:52:15.788472 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:15 crc kubenswrapper[4949]: E0120 14:52:15.788814 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:15 crc kubenswrapper[4949]: E0120 14:52:15.788867 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:16 crc kubenswrapper[4949]: I0120 14:52:16.788212 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:16 crc kubenswrapper[4949]: E0120 14:52:16.788410 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:17 crc kubenswrapper[4949]: I0120 14:52:17.788550 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:17 crc kubenswrapper[4949]: I0120 14:52:17.788606 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:17 crc kubenswrapper[4949]: E0120 14:52:17.788692 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:17 crc kubenswrapper[4949]: I0120 14:52:17.788770 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:17 crc kubenswrapper[4949]: E0120 14:52:17.788861 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:17 crc kubenswrapper[4949]: E0120 14:52:17.789018 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:18 crc kubenswrapper[4949]: I0120 14:52:18.788887 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:18 crc kubenswrapper[4949]: E0120 14:52:18.789102 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hlfls" podUID="fa4eae9d-b492-4fd3-8baf-38ed726d9e4c" Jan 20 14:52:19 crc kubenswrapper[4949]: I0120 14:52:19.788283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:19 crc kubenswrapper[4949]: I0120 14:52:19.788299 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:19 crc kubenswrapper[4949]: E0120 14:52:19.788547 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 14:52:19 crc kubenswrapper[4949]: E0120 14:52:19.788606 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 14:52:19 crc kubenswrapper[4949]: I0120 14:52:19.788892 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:19 crc kubenswrapper[4949]: E0120 14:52:19.789156 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.191369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.230276 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.231076 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.231443 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.231868 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.232310 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.232928 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.233209 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.233720 4949 util.go:30] "No sandbox for pod can be found. 
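Sixteen seconds after the 14:52:04 "Node not becoming ready in time after startup" warning, the restarted multus pod has written its CNI configuration, the runtime network turns ready, and the kubelet records NodeReady at 14:52:20.191369. The scheduler reacts at once: the burst of "SyncLoop ADD" entries that follows is the backlog of control-plane and operator pods being bound to the node now that it is schedulable. One way to confirm the transition from outside the logs is to read the node's Ready condition via client-go; the sketch below is a hypothetical check (kubeconfig taken from the environment, node name "crc" taken from this log), not something the cluster itself runs.

// node_ready.go - a minimal client-go sketch that prints the NodeReady
// condition; its LastTransitionTime should line up with the 14:52:20 event.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s since %s (%s)\n", c.Status, c.LastTransitionTime, c.Reason)
		}
	}
}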
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.236339 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.236797 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r9kf7"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.237266 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.237835 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.238411 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.238891 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.252995 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.254567 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266801 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmgqx\" (UniqueName: \"kubernetes.io/projected/7f69495e-a17d-4493-b598-99c2fc9afee7-kube-api-access-nmgqx\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266833 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266860 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-machine-approver-tls\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266885 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-config\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266912 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqn7b\" (UniqueName: \"kubernetes.io/projected/cc07a381-955f-47a2-89ab-59985f08e602-kube-api-access-tqn7b\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266941 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-node-pullsecrets\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.266965 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-client\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-image-import-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267026 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267048 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267075 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f69495e-a17d-4493-b598-99c2fc9afee7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267111 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267178 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267199 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267227 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-images\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267255 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267283 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkst\" (UniqueName: \"kubernetes.io/projected/1c433a7c-ae2d-4320-b456-58b37bdd5f22-kube-api-access-sjkst\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267333 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267371 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc07a381-955f-47a2-89ab-59985f08e602-serving-cert\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267397 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267422 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267451 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg6v\" (UniqueName: \"kubernetes.io/projected/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-kube-api-access-gdg6v\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267500 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-encryption-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267553 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-serving-cert\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267596 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-serving-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267731 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4949c\" (UniqueName: \"kubernetes.io/projected/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-kube-api-access-4949c\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-config\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.267952 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit-dir\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.268006 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.268113 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-auth-proxy-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.271014 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.271678 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.271966 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.272435 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.272742 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.272785 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.273802 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.283656 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.283919 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284059 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284405 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284570 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284674 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284730 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284826 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284929 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285044 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285118 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285195 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285251 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.284615 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285374 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285396 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285474 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285490 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285556 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285640 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285123 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285325 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285345 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285323 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285857 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285972 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286110 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286188 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286275 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286405 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286628 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286414 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.286963 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287149 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287333 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287551 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287744 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.287945 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.288181 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.288421 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.288674 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.290690 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bb9s9"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.293785 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mlc47"]
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.294249 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.285492 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.295360 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.292561 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.298561 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300058 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300149 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300270 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300409 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300062 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.300955 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301092 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301373 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301631 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m8sd9"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.301681 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.306405 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.307129 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.307813 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.307976 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.308404 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.309069 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.309148 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.309206 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.310315 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.310474 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.310729 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.312762 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.314595 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.314632 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.314698 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316007 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316099 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316176 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316401 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316472 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316406 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316563 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316629 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.316703 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317153 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 
14:52:20.317458 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kncwj"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317744 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r9dfg"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317808 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.317982 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.319817 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320130 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320697 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320785 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320790 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320803 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.320941 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.323084 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.329458 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.333177 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.333695 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.334044 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.334532 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.334696 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336585 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336721 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336862 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.336990 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.337116 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.338063 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.338307 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.338472 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339006 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339066 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339129 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339217 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339279 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.339641 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.340265 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.340430 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.340821 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.341014 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.345824 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.347977 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.349181 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.351210 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.352354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.378266 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.379720 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.381634 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.385420 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.385846 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.386680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.386869 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387623 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-4949c\" (UniqueName: \"kubernetes.io/projected/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-kube-api-access-4949c\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387769 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-config\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387829 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit-dir\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387853 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-auth-proxy-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387919 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387942 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmgqx\" (UniqueName: \"kubernetes.io/projected/7f69495e-a17d-4493-b598-99c2fc9afee7-kube-api-access-nmgqx\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387965 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: 
\"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-machine-approver-tls\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-config\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388029 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqn7b\" (UniqueName: \"kubernetes.io/projected/cc07a381-955f-47a2-89ab-59985f08e602-kube-api-access-tqn7b\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388052 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-node-pullsecrets\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-client\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-image-import-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388125 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388150 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388173 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f69495e-a17d-4493-b598-99c2fc9afee7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388198 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388238 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388258 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388279 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-images\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388299 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388320 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388343 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkst\" (UniqueName: \"kubernetes.io/projected/1c433a7c-ae2d-4320-b456-58b37bdd5f22-kube-api-access-sjkst\") pod \"apiserver-76f77b778f-r9kf7\" (UID: 
\"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388398 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc07a381-955f-47a2-89ab-59985f08e602-serving-cert\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388402 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-node-pullsecrets\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388441 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388470 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdg6v\" (UniqueName: \"kubernetes.io/projected/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-kube-api-access-gdg6v\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388492 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388541 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-encryption-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388578 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-serving-cert\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.388604 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-serving-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.391269 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.391474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-serving-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.391488 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.393909 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.394681 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.394715 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.394846 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395004 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395239 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395600 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1c433a7c-ae2d-4320-b456-58b37bdd5f22-audit-dir\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.385858 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.395899 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396011 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396143 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396349 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.398494 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-service-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.398656 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.396372 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-images\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.399058 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-auth-proxy-config\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.399601 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.399808 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.400329 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.400382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f69495e-a17d-4493-b598-99c2fc9afee7-config\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387461 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.400786 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.387534 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.411918 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.407457 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-image-import-ca\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.414645 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.414846 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415055 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415143 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-etcd-client\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415270 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415581 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-27qdj"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.407724 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc07a381-955f-47a2-89ab-59985f08e602-serving-cert\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.415992 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.416176 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f69495e-a17d-4493-b598-99c2fc9afee7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.416530 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-encryption-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417212 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417804 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-machine-approver-tls\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.417852 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.418382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.419131 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc07a381-955f-47a2-89ab-59985f08e602-config\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.421866 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.422481 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.422585 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423004 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423061 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423407 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.423497 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.424199 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.424318 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.424628 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.430170 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r9kf7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.430249 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.433554 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.433669 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.433861 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.436778 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8mlj4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.437903 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.438023 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.439950 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mlc47"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.460268 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.460321 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.460344 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bb9s9"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.466730 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.472936 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.475958 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.478979 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.479450 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.486848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490357 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490396 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490423 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4787cfd3-62d3-494b-94c9-e01ff459c73b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 
crc kubenswrapper[4949]: I0120 14:52:20.490441 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490467 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnvjf\" (UniqueName: \"kubernetes.io/projected/4787cfd3-62d3-494b-94c9-e01ff459c73b-kube-api-access-mnvjf\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/056f1862-446a-4aa9-9a9f-f09463c32dab-trusted-ca\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490537 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8md\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-kube-api-access-rc8md\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490554 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-encryption-config\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490582 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/056f1862-446a-4aa9-9a9f-f09463c32dab-metrics-tls\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490600 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-policies\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-serving-cert\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490634 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6b5b\" (UniqueName: \"kubernetes.io/projected/113494fa-baf7-4f60-9a9c-e8c8d6abb146-kube-api-access-v6b5b\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490655 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-client\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490730 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d68754d4-260b-460e-a34e-3d4a7313e4eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490764 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-dir\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490794 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68754d4-260b-460e-a34e-3d4a7313e4eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.490836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68754d4-260b-460e-a34e-3d4a7313e4eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 
14:52:20.491165 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4787cfd3-62d3-494b-94c9-e01ff459c73b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.491183 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.493569 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.493626 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.493643 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m8sd9"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.494832 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.495693 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.497098 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.497696 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.498970 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.499656 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.500705 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r9dfg"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.501805 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-27qdj"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.502879 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.503848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.504844 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.505821 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.506804 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-j8fgh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.507985 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.508246 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.509163 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pcdvd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.510233 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.510286 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.511346 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.512260 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.513205 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.514211 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.515242 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.516313 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pcdvd"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.517261 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.518243 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j8fgh"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.519202 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lvqj5"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.520551 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lvqj5"] Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.520666 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.523988 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.538245 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.554288 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.573920 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.591953 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68754d4-260b-460e-a34e-3d4a7313e4eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4787cfd3-62d3-494b-94c9-e01ff459c73b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592040 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592128 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592165 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4787cfd3-62d3-494b-94c9-e01ff459c73b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592191 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592687 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592761 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnvjf\" (UniqueName: \"kubernetes.io/projected/4787cfd3-62d3-494b-94c9-e01ff459c73b-kube-api-access-mnvjf\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.592801 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/056f1862-446a-4aa9-9a9f-f09463c32dab-trusted-ca\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.593387 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594013 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594381 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-encryption-config\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594602 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8md\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-kube-api-access-rc8md\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/056f1862-446a-4aa9-9a9f-f09463c32dab-metrics-tls\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594834 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-policies\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594957 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.595592 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/056f1862-446a-4aa9-9a9f-f09463c32dab-trusted-ca\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.595866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-policies\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.594902 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-serving-cert\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596213 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6b5b\" (UniqueName: \"kubernetes.io/projected/113494fa-baf7-4f60-9a9c-e8c8d6abb146-kube-api-access-v6b5b\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596340 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-client\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596455 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.596966 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d68754d4-260b-460e-a34e-3d4a7313e4eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.597088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-dir\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.597137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68754d4-260b-460e-a34e-3d4a7313e4eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.599112 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/113494fa-baf7-4f60-9a9c-e8c8d6abb146-audit-dir\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.599534 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-serving-cert\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.601689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/056f1862-446a-4aa9-9a9f-f09463c32dab-metrics-tls\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.603749 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-encryption-config\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.607323 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.614429 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 14:52:20 crc 
kubenswrapper[4949]: I0120 14:52:20.634381 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.654236 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.674264 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.694049 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.706141 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4787cfd3-62d3-494b-94c9-e01ff459c73b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.714017 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.723723 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4787cfd3-62d3-494b-94c9-e01ff459c73b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.733555 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.753030 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.772548 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.788657 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.793095 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.814048 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.832818 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.854168 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.873628 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.894443 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.913820 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.934443 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.953685 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.974058 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.988964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68754d4-260b-460e-a34e-3d4a7313e4eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.993658 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.996363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c433a7c-ae2d-4320-b456-58b37bdd5f22-serving-cert\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:20 crc kubenswrapper[4949]: I0120 14:52:20.996575 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c433a7c-ae2d-4320-b456-58b37bdd5f22-config\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.013810 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.018437 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68754d4-260b-460e-a34e-3d4a7313e4eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.034193 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.053646 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.074207 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.075005 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.078125 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/113494fa-baf7-4f60-9a9c-e8c8d6abb146-etcd-client\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.114600 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.133269 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.153670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.173215 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.194122 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.253210 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4949c\" (UniqueName: \"kubernetes.io/projected/b1e03d2d-d9d6-4cf8-9339-ec325b99453d-kube-api-access-4949c\") pod \"openshift-apiserver-operator-796bbdcf4f-kzsc7\" (UID: \"b1e03d2d-d9d6-4cf8-9339-ec325b99453d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.269028 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod 
\"controller-manager-879f6c89f-fz5x4\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.292749 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkst\" (UniqueName: \"kubernetes.io/projected/1c433a7c-ae2d-4320-b456-58b37bdd5f22-kube-api-access-sjkst\") pod \"apiserver-76f77b778f-r9kf7\" (UID: \"1c433a7c-ae2d-4320-b456-58b37bdd5f22\") " pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.308619 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"route-controller-manager-6576b87f9c-zc5vv\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.313749 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.334663 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.345725 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.354189 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.373581 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.394131 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.411852 4949 request.go:700] Waited for 1.014910136s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.413630 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.434603 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.454819 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.457106 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.473552 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.494087 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.501945 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.510730 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.518135 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.535228 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.554940 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.569891 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r9kf7"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.575489 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.608974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdg6v\" (UniqueName: \"kubernetes.io/projected/1ad22095-966c-4fe7-8fb2-4caa9bf87d1a-kube-api-access-gdg6v\") pod \"machine-approver-56656f9798-6l9js\" (UID: \"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.614249 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.637297 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.661484 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.683149 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmgqx\" (UniqueName: \"kubernetes.io/projected/7f69495e-a17d-4493-b598-99c2fc9afee7-kube-api-access-nmgqx\") pod \"machine-api-operator-5694c8668f-tsmsl\" (UID: \"7f69495e-a17d-4493-b598-99c2fc9afee7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.689805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tqn7b\" (UniqueName: \"kubernetes.io/projected/cc07a381-955f-47a2-89ab-59985f08e602-kube-api-access-tqn7b\") pod \"authentication-operator-69f744f599-l4xbn\" (UID: \"cc07a381-955f-47a2-89ab-59985f08e602\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.694980 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.694702 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.743125 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.743541 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 14:52:21 crc kubenswrapper[4949]: W0120 14:52:21.744421 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1e03d2d_d9d6_4cf8_9339_ec325b99453d.slice/crio-ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628 WatchSource:0}: Error finding container ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628: Status 404 returned error can't find the container with id ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628 Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.753361 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.773675 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.774287 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.775178 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.788320 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.788336 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.788580 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.793726 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: W0120 14:52:21.795065 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6278caf6_b4d9_414c_99ed_686de2b23a80.slice/crio-7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70 WatchSource:0}: Error finding container 7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70: Status 404 returned error can't find the container with id 7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70 Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.813439 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.818015 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.833959 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.853938 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.874091 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.893426 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.894523 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.918434 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.939309 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.951358 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tsmsl"] Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.953878 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 14:52:21 crc kubenswrapper[4949]: W0120 14:52:21.963117 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f69495e_a17d_4493_b598_99c2fc9afee7.slice/crio-548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e WatchSource:0}: Error finding container 548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e: Status 404 returned error can't find the container with id 548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.976907 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 14:52:21 crc kubenswrapper[4949]: I0120 14:52:21.996013 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.015079 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.032917 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.054192 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.073466 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.077569 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-l4xbn"] Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.093580 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.113396 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.132992 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 14:52:22 crc kubenswrapper[4949]: W0120 14:52:22.139454 4949 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc07a381_955f_47a2_89ab_59985f08e602.slice/crio-53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf WatchSource:0}: Error finding container 53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf: Status 404 returned error can't find the container with id 53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.153374 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.172842 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.192897 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.214256 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.233605 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.253922 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.274081 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.293426 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.313771 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.333914 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.353330 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.373376 4949 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.393915 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.413711 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.431993 4949 request.go:700] Waited for 1.838282479s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.452494 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.471191 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnvjf\" (UniqueName: \"kubernetes.io/projected/4787cfd3-62d3-494b-94c9-e01ff459c73b-kube-api-access-mnvjf\") pod \"openshift-controller-manager-operator-756b6f6bc6-r8xkq\" (UID: \"4787cfd3-62d3-494b-94c9-e01ff459c73b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.487762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8md\" (UniqueName: \"kubernetes.io/projected/056f1862-446a-4aa9-9a9f-f09463c32dab-kube-api-access-rc8md\") pod \"ingress-operator-5b745b69d9-9ggcd\" (UID: \"056f1862-446a-4aa9-9a9f-f09463c32dab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.507584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6b5b\" (UniqueName: \"kubernetes.io/projected/113494fa-baf7-4f60-9a9c-e8c8d6abb146-kube-api-access-v6b5b\") pod \"apiserver-7bbb656c7d-c8phr\" (UID: \"113494fa-baf7-4f60-9a9c-e8c8d6abb146\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.527345 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d68754d4-260b-460e-a34e-3d4a7313e4eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xvbft\" (UID: \"d68754d4-260b-460e-a34e-3d4a7313e4eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.535844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" event={"ID":"cc07a381-955f-47a2-89ab-59985f08e602","Type":"ContainerStarted","Data":"29f094912f73a29db8ae50237a19e53739f670dd3fdc4b70fd4d6162582373d7"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.535906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" event={"ID":"cc07a381-955f-47a2-89ab-59985f08e602","Type":"ContainerStarted","Data":"53df08ff968cf006c7c2fe0424ef60e8698e75f42ffc1de3f6ee4271ea5e2faf"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.537855 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" event={"ID":"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a","Type":"ContainerStarted","Data":"3cbb47949f3611a106c55ae3c09b569b29efc89c5be14377d0c73f8a9b8e6291"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.537881 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" event={"ID":"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a","Type":"ContainerStarted","Data":"cacaa10e06e80f06e91c6ed729042af0bf81d186d1f76f05404c95fc95246ac8"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.537891 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" event={"ID":"1ad22095-966c-4fe7-8fb2-4caa9bf87d1a","Type":"ContainerStarted","Data":"1fe3b01b1d19669fae768fefd6ed9eb0085b09f78d7896defb976e88a14da08d"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.542073 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerStarted","Data":"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.542144 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerStarted","Data":"6cdd7178026b2587db50c95fe7c40688b8e05cd993d070aa0db4f3a3e9c38e1f"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.542377 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.543969 4949 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zc5vv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.543994 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerDied","Data":"032d13aa9eab6efaa4e137f542fc4683fbdc6793665d1b5b8603afa609d985c6"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.544020 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.543960 4949 generic.go:334] "Generic (PLEG): container finished" podID="1c433a7c-ae2d-4320-b456-58b37bdd5f22" containerID="032d13aa9eab6efaa4e137f542fc4683fbdc6793665d1b5b8603afa609d985c6" exitCode=0 Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.544122 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerStarted","Data":"0efa6db78e84cb5227720027d0f377bcda2d44ee51afedbcff4784bee253dd91"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.546483 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" event={"ID":"7f69495e-a17d-4493-b598-99c2fc9afee7","Type":"ContainerStarted","Data":"d7e19dd5252c402931923f8d46dd74c4df0cbd1f7c164cb2e90cd291ed391050"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.546847 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" event={"ID":"7f69495e-a17d-4493-b598-99c2fc9afee7","Type":"ContainerStarted","Data":"a4bb167e555bbaf94d569377a87cc211a2c59e6ad892f948644dfd910cf08394"} Jan 20 14:52:22 crc kubenswrapper[4949]: 
I0120 14:52:22.546864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" event={"ID":"7f69495e-a17d-4493-b598-99c2fc9afee7","Type":"ContainerStarted","Data":"548ba1a0c76cc74a6f3b697dd70bba68370d5e62f316b04d328022149a43027e"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.547762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod \"marketplace-operator-79b997595-ntmdh\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.548411 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerStarted","Data":"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.548456 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.548470 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerStarted","Data":"7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.549855 4949 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fz5x4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.549902 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.550738 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" event={"ID":"b1e03d2d-d9d6-4cf8-9339-ec325b99453d","Type":"ContainerStarted","Data":"988323478fe4a63f7c97a04f3239dbdd83a78967a23ef021f317a613dcd00a7e"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.550780 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" event={"ID":"b1e03d2d-d9d6-4cf8-9339-ec325b99453d","Type":"ContainerStarted","Data":"ff90c5ee5e38f531c48f7735d37fa8892c79a5efadb24237ca14916ed1bb0628"} Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.553056 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.574138 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.575578 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.623039 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.634359 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652565 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tws9\" (UniqueName: \"kubernetes.io/projected/8516de03-2f1a-43e7-8af0-116378f96b8f-kube-api-access-7tws9\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652631 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgh7\" (UniqueName: \"kubernetes.io/projected/18aa9682-4716-4c4f-a53e-cc2f312c7c16-kube-api-access-jjgh7\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652662 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqdg\" (UniqueName: \"kubernetes.io/projected/182137c4-babb-4c69-b53d-d37131c3041a-kube-api-access-ksqdg\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652727 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8169cee8-7942-4c7f-92bd-f89e4b027b83-proxy-tls\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652761 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnxqp\" (UniqueName: \"kubernetes.io/projected/fe950de2-c48d-481b-a5fc-c943fe124904-kube-api-access-wnxqp\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652791 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-images\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652855 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-trusted-ca\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652885 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-metrics-certs\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652904 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-config\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwz2\" (UniqueName: \"kubernetes.io/projected/abb60fa1-1584-4837-890f-888754026b25-kube-api-access-vxwz2\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652959 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.652991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.653012 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-serving-cert\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654092 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/182137c4-babb-4c69-b53d-d37131c3041a-metrics-tls\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654114 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654172 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8516de03-2f1a-43e7-8af0-116378f96b8f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654195 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/abb60fa1-1584-4837-890f-888754026b25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654242 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-client\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654335 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe950de2-c48d-481b-a5fc-c943fe124904-service-ca-bundle\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzvg7\" (UniqueName: \"kubernetes.io/projected/e45c974f-4645-4895-9f73-cfd03e798e00-kube-api-access-nzvg7\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654448 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654488 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-config\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654592 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654627 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654676 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654789 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654829 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f451eb2-597d-47c6-aa10-66a79776f101-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654829 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654869 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18aa9682-4716-4c4f-a53e-cc2f312c7c16-serving-cert\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.654934 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655046 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655161 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655210 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655236 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655413 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655444 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655610 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/abb60fa1-1584-4837-890f-888754026b25-proxy-tls\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655696 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655726 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7fm\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-kube-api-access-sq7fm\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655744 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655769 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-config\") pod 
\"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655797 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655821 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f451eb2-597d-47c6-aa10-66a79776f101-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655848 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655868 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655911 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnm5\" (UniqueName: \"kubernetes.io/projected/8169cee8-7942-4c7f-92bd-f89e4b027b83-kube-api-access-ntnm5\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.655977 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656029 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656051 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-stats-auth\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656073 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656105 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656128 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725zb\" (UniqueName: \"kubernetes.io/projected/33ca7885-743f-48cd-b3ba-80f9a1f8cf85-kube-api-access-725zb\") pod \"downloads-7954f5f757-bb9s9\" (UID: \"33ca7885-743f-48cd-b3ba-80f9a1f8cf85\") " pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656213 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-service-ca\") pod 
\"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.656236 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-default-certificate\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.656397 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.156386454 +0000 UTC m=+138.966217312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.678964 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.683436 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.694319 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.721616 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.757399 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.257378981 +0000 UTC m=+139.067209839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757430 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-srv-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757481 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-serving-cert\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757508 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/182137c4-babb-4c69-b53d-d37131c3041a-metrics-tls\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757555 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8516de03-2f1a-43e7-8af0-116378f96b8f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757579 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/abb60fa1-1584-4837-890f-888754026b25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757638 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-mountpoint-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757711 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757740 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-client\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757762 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe950de2-c48d-481b-a5fc-c943fe124904-service-ca-bundle\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzvg7\" (UniqueName: \"kubernetes.io/projected/e45c974f-4645-4895-9f73-cfd03e798e00-kube-api-access-nzvg7\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757871 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757894 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-config\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.757992 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758017 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6pz\" (UniqueName: \"kubernetes.io/projected/27518978-3cb4-4732-bc84-13abfa7e9c81-kube-api-access-lw6pz\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92sl\" (UniqueName: \"kubernetes.io/projected/1a0cc344-c778-44a2-a6f6-e2067286c347-kube-api-access-v92sl\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758074 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758122 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqsfd\" (UniqueName: \"kubernetes.io/projected/c47ecb6d-9ecf-480f-b605-4dd91e900521-kube-api-access-hqsfd\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 
14:52:22.758145 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f451eb2-597d-47c6-aa10-66a79776f101-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758183 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94z48\" (UniqueName: \"kubernetes.io/projected/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-kube-api-access-94z48\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758204 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27518978-3cb4-4732-bc84-13abfa7e9c81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758231 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-registration-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758257 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758282 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bc7\" (UniqueName: \"kubernetes.io/projected/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-kube-api-access-j8bc7\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758308 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18aa9682-4716-4c4f-a53e-cc2f312c7c16-serving-cert\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758364 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758391 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95c38c39-62f0-4343-9628-5070d8cc10b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758414 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9d1a76-4686-40ae-8b09-e66126088926-cert\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758434 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758469 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-tmpfs\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758497 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a47ded-8ed0-4c5c-8e53-2ff63413b679-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-plugins-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758733 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvpj7\" (UniqueName: \"kubernetes.io/projected/10228b44-2c32-4fab-a4f9-c703ef0b6b39-kube-api-access-cvpj7\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 
14:52:22.758761 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758786 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758825 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758850 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758891 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27518978-3cb4-4732-bc84-13abfa7e9c81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758915 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8351da-e624-4d42-be80-14e2c90c57f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758952 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.758975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/980ff476-0915-44c2-8665-41d9074e3763-signing-key\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc 
kubenswrapper[4949]: I0120 14:52:22.759003 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759028 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-webhook-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759092 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-csi-data-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759113 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/980ff476-0915-44c2-8665-41d9074e3763-signing-cabundle\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759135 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/abb60fa1-1584-4837-890f-888754026b25-proxy-tls\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759159 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759182 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-apiservice-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759206 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sq7fm\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-kube-api-access-sq7fm\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-config\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759276 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f451eb2-597d-47c6-aa10-66a79776f101-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759312 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-socket-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a0cc344-c778-44a2-a6f6-e2067286c347-config-volume\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759382 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759404 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759428 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759455 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnm5\" (UniqueName: \"kubernetes.io/projected/8169cee8-7942-4c7f-92bd-f89e4b027b83-kube-api-access-ntnm5\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xq6s\" (UniqueName: \"kubernetes.io/projected/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-kube-api-access-9xq6s\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759509 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759575 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759601 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759623 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-stats-auth\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759646 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759754 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-725zb\" (UniqueName: \"kubernetes.io/projected/33ca7885-743f-48cd-b3ba-80f9a1f8cf85-kube-api-access-725zb\") pod \"downloads-7954f5f757-bb9s9\" (UID: \"33ca7885-743f-48cd-b3ba-80f9a1f8cf85\") " pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759780 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759804 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-node-bootstrap-token\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-service-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-default-certificate\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-config\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759934 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/980ff476-0915-44c2-8665-41d9074e3763-kube-api-access-k5jkz\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759962 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tws9\" (UniqueName: \"kubernetes.io/projected/8516de03-2f1a-43e7-8af0-116378f96b8f-kube-api-access-7tws9\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.759985 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lmn\" (UniqueName: \"kubernetes.io/projected/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-kube-api-access-d5lmn\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8351da-e624-4d42-be80-14e2c90c57f4-config\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760057 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86q6\" (UniqueName: \"kubernetes.io/projected/3fae0085-f1fb-44ed-b871-0e6fe5072006-kube-api-access-f86q6\") pod \"migrator-59844c95c7-vr5fk\" (UID: \"3fae0085-f1fb-44ed-b871-0e6fe5072006\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760095 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgh7\" (UniqueName: \"kubernetes.io/projected/18aa9682-4716-4c4f-a53e-cc2f312c7c16-kube-api-access-jjgh7\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760131 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760156 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqdg\" (UniqueName: \"kubernetes.io/projected/182137c4-babb-4c69-b53d-d37131c3041a-kube-api-access-ksqdg\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 
20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760195 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstf7\" (UniqueName: \"kubernetes.io/projected/95c38c39-62f0-4343-9628-5070d8cc10b7-kube-api-access-hstf7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760222 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-certs\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760246 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8169cee8-7942-4c7f-92bd-f89e4b027b83-proxy-tls\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqd92\" (UniqueName: \"kubernetes.io/projected/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-kube-api-access-bqd92\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760320 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnxqp\" (UniqueName: \"kubernetes.io/projected/fe950de2-c48d-481b-a5fc-c943fe124904-kube-api-access-wnxqp\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760347 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-images\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-trusted-ca\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760434 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-metrics-certs\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.760488 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vxwz2\" (UniqueName: \"kubernetes.io/projected/abb60fa1-1584-4837-890f-888754026b25-kube-api-access-vxwz2\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762117 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762157 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-serving-cert\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-config\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762248 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95s74\" (UniqueName: \"kubernetes.io/projected/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-kube-api-access-95s74\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762272 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-srv-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762332 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762370 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.762392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/1a0cc344-c778-44a2-a6f6-e2067286c347-metrics-tls\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.763262 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.765421 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/abb60fa1-1584-4837-890f-888754026b25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.765794 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.767539 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8516de03-2f1a-43e7-8af0-116378f96b8f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.767727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.769218 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwhx\" (UniqueName: \"kubernetes.io/projected/8b9d1a76-4686-40ae-8b09-e66126088926-kube-api-access-9lwhx\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.769623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-config\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.769732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-images\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.770044 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-service-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.770061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18aa9682-4716-4c4f-a53e-cc2f312c7c16-trusted-ca\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.770795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe950de2-c48d-481b-a5fc-c943fe124904-service-ca-bundle\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.771504 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.773665 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f451eb2-597d-47c6-aa10-66a79776f101-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.774917 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.775928 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.775994 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rw4\" (UniqueName: \"kubernetes.io/projected/97a47ded-8ed0-4c5c-8e53-2ff63413b679-kube-api-access-l8rw4\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776021 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-config\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: 
\"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776131 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8351da-e624-4d42-be80-14e2c90c57f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776163 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c47ecb6d-9ecf-480f-b605-4dd91e900521-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.776759 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.777627 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8169cee8-7942-4c7f-92bd-f89e4b027b83-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.778719 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.780283 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/abb60fa1-1584-4837-890f-888754026b25-proxy-tls\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.781080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.782838 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.783255 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-config\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.783918 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.784815 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-ca\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.785246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.785333 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.285317744 +0000 UTC m=+139.095148792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.788945 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.801117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.801321 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18aa9682-4716-4c4f-a53e-cc2f312c7c16-serving-cert\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.802767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.802793 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.803229 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.825341 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/182137c4-babb-4c69-b53d-d37131c3041a-metrics-tls\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.825398 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.839842 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.839990 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-stats-auth\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.841042 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-default-certificate\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.841161 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.845293 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.846249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f451eb2-597d-47c6-aa10-66a79776f101-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.846886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.847322 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc 
kubenswrapper[4949]: I0120 14:52:22.847436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-etcd-client\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.847436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45c974f-4645-4895-9f73-cfd03e798e00-serving-cert\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.847601 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.848460 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe950de2-c48d-481b-a5fc-c943fe124904-metrics-certs\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.850743 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr"] Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.851258 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.851865 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnxqp\" (UniqueName: \"kubernetes.io/projected/fe950de2-c48d-481b-a5fc-c943fe124904-kube-api-access-wnxqp\") pod \"router-default-5444994796-kncwj\" (UID: \"fe950de2-c48d-481b-a5fc-c943fe124904\") " pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.852381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.854249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwz2\" (UniqueName: \"kubernetes.io/projected/abb60fa1-1584-4837-890f-888754026b25-kube-api-access-vxwz2\") pod \"machine-config-controller-84d6567774-vzltk\" (UID: \"abb60fa1-1584-4837-890f-888754026b25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.866663 
4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8169cee8-7942-4c7f-92bd-f89e4b027b83-proxy-tls\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.866982 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgh7\" (UniqueName: \"kubernetes.io/projected/18aa9682-4716-4c4f-a53e-cc2f312c7c16-kube-api-access-jjgh7\") pod \"console-operator-58897d9998-mlc47\" (UID: \"18aa9682-4716-4c4f-a53e-cc2f312c7c16\") " pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.878543 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.879851 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880201 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c47ecb6d-9ecf-480f-b605-4dd91e900521-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880237 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-srv-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-mountpoint-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880330 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw6pz\" (UniqueName: \"kubernetes.io/projected/27518978-3cb4-4732-bc84-13abfa7e9c81-kube-api-access-lw6pz\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: 
\"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880407 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92sl\" (UniqueName: \"kubernetes.io/projected/1a0cc344-c778-44a2-a6f6-e2067286c347-kube-api-access-v92sl\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880432 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqsfd\" (UniqueName: \"kubernetes.io/projected/c47ecb6d-9ecf-480f-b605-4dd91e900521-kube-api-access-hqsfd\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880459 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94z48\" (UniqueName: \"kubernetes.io/projected/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-kube-api-access-94z48\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880476 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27518978-3cb4-4732-bc84-13abfa7e9c81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880496 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-registration-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880535 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bc7\" (UniqueName: \"kubernetes.io/projected/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-kube-api-access-j8bc7\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880558 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9d1a76-4686-40ae-8b09-e66126088926-cert\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880581 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: 
I0120 14:52:22.880621 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95c38c39-62f0-4343-9628-5070d8cc10b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880649 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-tmpfs\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880669 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a47ded-8ed0-4c5c-8e53-2ff63413b679-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880691 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-plugins-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880718 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvpj7\" (UniqueName: \"kubernetes.io/projected/10228b44-2c32-4fab-a4f9-c703ef0b6b39-kube-api-access-cvpj7\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880757 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880774 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27518978-3cb4-4732-bc84-13abfa7e9c81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880794 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8351da-e624-4d42-be80-14e2c90c57f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc 
kubenswrapper[4949]: I0120 14:52:22.880816 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/980ff476-0915-44c2-8665-41d9074e3763-signing-key\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880857 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-webhook-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-csi-data-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880896 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/980ff476-0915-44c2-8665-41d9074e3763-signing-cabundle\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-apiservice-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-socket-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880963 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a0cc344-c778-44a2-a6f6-e2067286c347-config-volume\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.880989 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881010 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xq6s\" (UniqueName: \"kubernetes.io/projected/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-kube-api-access-9xq6s\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-node-bootstrap-token\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881110 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lmn\" (UniqueName: \"kubernetes.io/projected/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-kube-api-access-d5lmn\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8351da-e624-4d42-be80-14e2c90c57f4-config\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881154 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-config\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881173 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/980ff476-0915-44c2-8665-41d9074e3763-kube-api-access-k5jkz\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881228 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f86q6\" (UniqueName: \"kubernetes.io/projected/3fae0085-f1fb-44ed-b871-0e6fe5072006-kube-api-access-f86q6\") pod \"migrator-59844c95c7-vr5fk\" (UID: \"3fae0085-f1fb-44ed-b871-0e6fe5072006\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881257 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hstf7\" (UniqueName: \"kubernetes.io/projected/95c38c39-62f0-4343-9628-5070d8cc10b7-kube-api-access-hstf7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 
14:52:22.881274 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-certs\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqd92\" (UniqueName: \"kubernetes.io/projected/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-kube-api-access-bqd92\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881337 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-serving-cert\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881364 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95s74\" (UniqueName: \"kubernetes.io/projected/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-kube-api-access-95s74\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881405 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-srv-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881423 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0cc344-c778-44a2-a6f6-e2067286c347-metrics-tls\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881440 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwhx\" (UniqueName: \"kubernetes.io/projected/8b9d1a76-4686-40ae-8b09-e66126088926-kube-api-access-9lwhx\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881465 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: 
\"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881485 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rw4\" (UniqueName: \"kubernetes.io/projected/97a47ded-8ed0-4c5c-8e53-2ff63413b679-kube-api-access-l8rw4\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.881535 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8351da-e624-4d42-be80-14e2c90c57f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.882489 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.382460298 +0000 UTC m=+139.192291336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.884763 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.884830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-mountpoint-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.885021 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-plugins-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.885674 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-srv-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.885754 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-registration-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.886779 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27518978-3cb4-4732-bc84-13abfa7e9c81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.887046 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.887225 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-csi-data-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.890744 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8351da-e624-4d42-be80-14e2c90c57f4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.890929 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8351da-e624-4d42-be80-14e2c90c57f4-config\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.891080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b9d1a76-4686-40ae-8b09-e66126088926-cert\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.891469 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-tmpfs\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.891787 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a0cc344-c778-44a2-a6f6-e2067286c347-config-volume\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " 
pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.892122 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-socket-dir\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.892832 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-config\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.896149 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c47ecb6d-9ecf-480f-b605-4dd91e900521-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.904316 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-webhook-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.904936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10228b44-2c32-4fab-a4f9-c703ef0b6b39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.905500 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mlc47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.908646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/980ff476-0915-44c2-8665-41d9074e3763-signing-key\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.909470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-serving-cert\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.912932 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.912937 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.913722 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1a0cc344-c778-44a2-a6f6-e2067286c347-metrics-tls\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.914879 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/95c38c39-62f0-4343-9628-5070d8cc10b7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.915050 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.915745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqdg\" (UniqueName: \"kubernetes.io/projected/182137c4-babb-4c69-b53d-d37131c3041a-kube-api-access-ksqdg\") pod \"dns-operator-744455d44c-r9dfg\" (UID: \"182137c4-babb-4c69-b53d-d37131c3041a\") " pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.916268 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/27518978-3cb4-4732-bc84-13abfa7e9c81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.916366 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97a47ded-8ed0-4c5c-8e53-2ff63413b679-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.917398 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-node-bootstrap-token\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.929184 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/980ff476-0915-44c2-8665-41d9074e3763-signing-cabundle\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.930665 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-apiservice-cert\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.931121 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-certs\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.931354 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-srv-cert\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.937783 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"oauth-openshift-558db77b4-brlp7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.952177 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tws9\" (UniqueName: \"kubernetes.io/projected/8516de03-2f1a-43e7-8af0-116378f96b8f-kube-api-access-7tws9\") pod \"cluster-samples-operator-665b6dd947-5j28t\" (UID: \"8516de03-2f1a-43e7-8af0-116378f96b8f\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.953266 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7fm\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-kube-api-access-sq7fm\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.970802 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.973266 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.978826 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-btnmm\" (UID: \"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.980733 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd"] Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.987622 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:22 crc kubenswrapper[4949]: E0120 14:52:22.988191 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.488168328 +0000 UTC m=+139.297999186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:22 crc kubenswrapper[4949]: I0120 14:52:22.992234 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kncwj" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.012084 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.028281 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.040566 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"console-f9d7485db-w9d9r\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.042866 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.044887 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq"] Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.051690 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnm5\" (UniqueName: \"kubernetes.io/projected/8169cee8-7942-4c7f-92bd-f89e4b027b83-kube-api-access-ntnm5\") pod \"machine-config-operator-74547568cd-lzfzd\" (UID: \"8169cee8-7942-4c7f-92bd-f89e4b027b83\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.099205 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.099978 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.599954517 +0000 UTC m=+139.409785375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.112312 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f451eb2-597d-47c6-aa10-66a79776f101-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cx9z8\" (UID: \"4f451eb2-597d-47c6-aa10-66a79776f101\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.112673 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzvg7\" (UniqueName: \"kubernetes.io/projected/e45c974f-4645-4895-9f73-cfd03e798e00-kube-api-access-nzvg7\") pod \"etcd-operator-b45778765-m8sd9\" (UID: \"e45c974f-4645-4895-9f73-cfd03e798e00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:23 crc kubenswrapper[4949]: W0120 14:52:23.131822 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe950de2_c48d_481b_a5fc_c943fe124904.slice/crio-e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed WatchSource:0}: Error finding container e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed: Status 404 returned error can't find the container with id e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.138298 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-725zb\" (UniqueName: \"kubernetes.io/projected/33ca7885-743f-48cd-b3ba-80f9a1f8cf85-kube-api-access-725zb\") pod \"downloads-7954f5f757-bb9s9\" (UID: \"33ca7885-743f-48cd-b3ba-80f9a1f8cf85\") " pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.145934 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft"] Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.149998 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bc7\" (UniqueName: \"kubernetes.io/projected/ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0-kube-api-access-j8bc7\") pod \"catalog-operator-68c6474976-mtrqm\" (UID: \"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.156708 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92sl\" (UniqueName: \"kubernetes.io/projected/1a0cc344-c778-44a2-a6f6-e2067286c347-kube-api-access-v92sl\") pod \"dns-default-j8fgh\" (UID: \"1a0cc344-c778-44a2-a6f6-e2067286c347\") " pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.161598 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.186960 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.197087 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqsfd\" (UniqueName: \"kubernetes.io/projected/c47ecb6d-9ecf-480f-b605-4dd91e900521-kube-api-access-hqsfd\") pod \"multus-admission-controller-857f4d67dd-zvfr4\" (UID: \"c47ecb6d-9ecf-480f-b605-4dd91e900521\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.198401 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94z48\" (UniqueName: \"kubernetes.io/projected/e025b5f3-9c92-4c5a-906b-3b1c6e9fe612-kube-api-access-94z48\") pod \"machine-config-server-8mlj4\" (UID: \"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612\") " pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.201730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.202275 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.70225893 +0000 UTC m=+139.512089788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.205595 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.213001 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.217764 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw6pz\" (UniqueName: \"kubernetes.io/projected/27518978-3cb4-4732-bc84-13abfa7e9c81-kube-api-access-lw6pz\") pod \"kube-storage-version-migrator-operator-b67b599dd-5rgxn\" (UID: \"27518978-3cb4-4732-bc84-13abfa7e9c81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.231903 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.237807 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvpj7\" (UniqueName: \"kubernetes.io/projected/10228b44-2c32-4fab-a4f9-c703ef0b6b39-kube-api-access-cvpj7\") pod \"olm-operator-6b444d44fb-xxm4k\" (UID: \"10228b44-2c32-4fab-a4f9-c703ef0b6b39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.237839 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.256108 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5jkz\" (UniqueName: \"kubernetes.io/projected/980ff476-0915-44c2-8665-41d9074e3763-kube-api-access-k5jkz\") pod \"service-ca-9c57cc56f-27qdj\" (UID: \"980ff476-0915-44c2-8665-41d9074e3763\") " pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.262482 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.265046 4949 csr.go:261] certificate signing request csr-xfdr5 is approved, waiting to be issued Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.277046 4949 csr.go:257] certificate signing request csr-xfdr5 is issued Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.278397 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xq6s\" (UniqueName: \"kubernetes.io/projected/01bfc821-a8ed-4dbd-a5b1-fa6659a6499f-kube-api-access-9xq6s\") pod \"openshift-config-operator-7777fb866f-qhn47\" (UID: \"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.296343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"collect-profiles-29482005-wzsk7\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.303942 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.304343 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.804325174 +0000 UTC m=+139.614156032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.325123 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8351da-e624-4d42-be80-14e2c90c57f4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hfg2r\" (UID: \"8b8351da-e624-4d42-be80-14e2c90c57f4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.337488 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.347152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f86q6\" (UniqueName: \"kubernetes.io/projected/3fae0085-f1fb-44ed-b871-0e6fe5072006-kube-api-access-f86q6\") pod \"migrator-59844c95c7-vr5fk\" (UID: \"3fae0085-f1fb-44ed-b871-0e6fe5072006\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.355854 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.361102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lmn\" (UniqueName: \"kubernetes.io/projected/541edc44-7cd7-4c73-a5eb-48e2f5fd69b3-kube-api-access-d5lmn\") pod \"csi-hostpathplugin-lvqj5\" (UID: \"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3\") " pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.366248 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.380785 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.383549 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"] Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.395257 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.403986 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.409355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95s74\" (UniqueName: \"kubernetes.io/projected/dd8570b5-67a9-4655-bc3e-c36bb6d5c646-kube-api-access-95s74\") pod \"service-ca-operator-777779d784-49lg4\" (UID: \"dd8570b5-67a9-4655-bc3e-c36bb6d5c646\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.419923 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.424024 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hstf7\" (UniqueName: \"kubernetes.io/projected/95c38c39-62f0-4343-9628-5070d8cc10b7-kube-api-access-hstf7\") pod \"control-plane-machine-set-operator-78cbb6b69f-d5t2m\" (UID: \"95c38c39-62f0-4343-9628-5070d8cc10b7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.430204 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.430250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqd92\" (UniqueName: \"kubernetes.io/projected/0b89af20-11f6-4e88-8b1c-5e5ff5b47a70-kube-api-access-bqd92\") pod \"packageserver-d55dfcdfc-qw6xk\" (UID: \"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.430629 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.431120 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.431662 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:23.931646829 +0000 UTC m=+139.741477687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.439896 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.445955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.453176 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwhx\" (UniqueName: \"kubernetes.io/projected/8b9d1a76-4686-40ae-8b09-e66126088926-kube-api-access-9lwhx\") pod \"ingress-canary-pcdvd\" (UID: \"8b9d1a76-4686-40ae-8b09-e66126088926\") " pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.455481 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mlj4" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.463863 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rw4\" (UniqueName: \"kubernetes.io/projected/97a47ded-8ed0-4c5c-8e53-2ff63413b679-kube-api-access-l8rw4\") pod \"package-server-manager-789f6589d5-5g8xw\" (UID: \"97a47ded-8ed0-4c5c-8e53-2ff63413b679\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.478030 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pcdvd" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.482734 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mlc47"] Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.485331 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.532217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.532996 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.032973768 +0000 UTC m=+139.842804626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.610253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" event={"ID":"4787cfd3-62d3-494b-94c9-e01ff459c73b","Type":"ContainerStarted","Data":"de8de3cd355a157b6f12ebaf21f381d323b906d872b3cd525050c22abe0d6fc9"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.614732 4949 generic.go:334] "Generic (PLEG): container finished" podID="113494fa-baf7-4f60-9a9c-e8c8d6abb146" containerID="6dd90ff4602a759742ef4b5a2644cd0cdfca0356f129ec4fe9db3642443bbb47" exitCode=0
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.614918 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" event={"ID":"113494fa-baf7-4f60-9a9c-e8c8d6abb146","Type":"ContainerDied","Data":"6dd90ff4602a759742ef4b5a2644cd0cdfca0356f129ec4fe9db3642443bbb47"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.614975 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" event={"ID":"113494fa-baf7-4f60-9a9c-e8c8d6abb146","Type":"ContainerStarted","Data":"f8927f80d6ab352d739f767e829fde753ee7ce0e9e4f6d680a1d142bff2c6486"}
Jan 20 14:52:23 crc kubenswrapper[4949]: W0120 14:52:23.617141 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45bacc20_7998_4250_bbd3_fd1d24741ea7.slice/crio-d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f WatchSource:0}: Error finding container d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f: Status 404 returned error can't find the container with id d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.619876 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerStarted","Data":"89ef8c049a23525f1cf29b7a9fc2a7b7e5865ef9e7c772deacf73e7e50d98951"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.635166 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.635434 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.135421286 +0000 UTC m=+139.945252144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.641041 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kncwj" event={"ID":"fe950de2-c48d-481b-a5fc-c943fe124904","Type":"ContainerStarted","Data":"e43f3f41b732b58041804723b1968d401596dc22c11f71bd7f7cc5ece56711ed"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.650255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" event={"ID":"056f1862-446a-4aa9-9a9f-f09463c32dab","Type":"ContainerStarted","Data":"34db3022f9b412d703645b40f9bf19b7323168b6725466186d6c6ed041a5565e"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.650329 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" event={"ID":"056f1862-446a-4aa9-9a9f-f09463c32dab","Type":"ContainerStarted","Data":"5c7c895fc6170ae616c1d3a919cf726add0d80c0bc182f19860276579ad4027c"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.657255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" event={"ID":"d68754d4-260b-460e-a34e-3d4a7313e4eb","Type":"ContainerStarted","Data":"e8b3ba99b2adcb5fddecf3546d6e23c4fb88d6189319ccaed8fb80f7d25879d3"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.658270 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.666591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerStarted","Data":"37fb91e24d9502fca7001a77a1082aa104b29a70445d3ced18d4a89d50594cce"}
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.688439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.697709 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.737331 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.738172 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.238153333 +0000 UTC m=+140.047984191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.839916 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.840945 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk"]
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.841935 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.341920276 +0000 UTC m=+140.151751124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.842985 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.843146 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:52:23 crc kubenswrapper[4949]: I0120 14:52:23.942151 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:23 crc kubenswrapper[4949]: E0120 14:52:23.942678 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.442649145 +0000 UTC m=+140.252480003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: W0120 14:52:24.027123 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb60fa1_1584_4837_890f_888754026b25.slice/crio-3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b WatchSource:0}: Error finding container 3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b: Status 404 returned error can't find the container with id 3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.048233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.048875 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.548831401 +0000 UTC m=+140.358662259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.149462 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.149936 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.649918782 +0000 UTC m=+140.459749630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.251320 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.251695 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.751677086 +0000 UTC m=+140.561507944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.278075 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-20 14:47:23 +0000 UTC, rotation deadline is 2026-12-01 19:16:32.592785638 +0000 UTC
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.278148 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7564h24m8.31464197s for next certificate rotation
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.352057 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.352782 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.852746746 +0000 UTC m=+140.662577604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.352938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.353279 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.853267314 +0000 UTC m=+140.663098172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.456093 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.456417 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:24.956392175 +0000 UTC m=+140.766223023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.475228 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kzsc7" podStartSLOduration=119.475214773 podStartE2EDuration="1m59.475214773s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.473878768 +0000 UTC m=+140.283709616" watchObservedRunningTime="2026-01-20 14:52:24.475214773 +0000 UTC m=+140.285045631"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.523474 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" podStartSLOduration=118.523458255 podStartE2EDuration="1m58.523458255s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.522989138 +0000 UTC m=+140.332819996" watchObservedRunningTime="2026-01-20 14:52:24.523458255 +0000 UTC m=+140.333289113"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.560293 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.560906 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.060891234 +0000 UTC m=+140.870722092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.589553 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-l4xbn" podStartSLOduration=119.58953572 podStartE2EDuration="1m59.58953572s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.587602103 +0000 UTC m=+140.397432961" watchObservedRunningTime="2026-01-20 14:52:24.58953572 +0000 UTC m=+140.399366578"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.661701 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.661801 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.161776017 +0000 UTC m=+140.971606865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.662595 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.663076 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.163063562 +0000 UTC m=+140.972894420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.715317 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" event={"ID":"abb60fa1-1584-4837-890f-888754026b25","Type":"ContainerStarted","Data":"313b85b5ccfbfed37d1a387f50fba0506960708f5ba0dc437a92da338c5cdcb4"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.715373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" event={"ID":"abb60fa1-1584-4837-890f-888754026b25","Type":"ContainerStarted","Data":"3cefe03f6ab84d7b1f69aec95a5eab9e707cfe9a7c1ad3ef0e8833707dce591b"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.743779 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mlc47" event={"ID":"18aa9682-4716-4c4f-a53e-cc2f312c7c16","Type":"ContainerStarted","Data":"103c82e6c589c90e8fe9b44bc61894c75a4c6cee3c97bd9e018892e90099c9b0"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.743830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mlc47" event={"ID":"18aa9682-4716-4c4f-a53e-cc2f312c7c16","Type":"ContainerStarted","Data":"1b6dcf23e704d43129b2db26c4cf71de5e420dff45438308b5dad9c933b766fe"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.746976 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.754753 4949 patch_prober.go:28] interesting pod/console-operator-58897d9998-mlc47 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.754835 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mlc47" podUID="18aa9682-4716-4c4f-a53e-cc2f312c7c16" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.756588 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mlj4" event={"ID":"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612","Type":"ContainerStarted","Data":"274f6429d9c5e4de1a3f70212f4fa396a3f9e051f079899d894b87f554cde5b9"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.756646 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mlj4" event={"ID":"e025b5f3-9c92-4c5a-906b-3b1c6e9fe612","Type":"ContainerStarted","Data":"2ef016eb38eac852c6d5d9d0475d714eb7957dc2f9f4690b1ec8b1137fb0c550"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.763290 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.763758 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.263741338 +0000 UTC m=+141.073572196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.798852 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6l9js" podStartSLOduration=119.798833317 podStartE2EDuration="1m59.798833317s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.794157296 +0000 UTC m=+140.603988144" watchObservedRunningTime="2026-01-20 14:52:24.798833317 +0000 UTC m=+140.608664175"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.840685 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" event={"ID":"1c433a7c-ae2d-4320-b456-58b37bdd5f22","Type":"ContainerStarted","Data":"3a3927b0fda226b916d27dde6f796567a78a6aa8b521909f2498a5f9dfe07b43"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.840740 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kncwj" event={"ID":"fe950de2-c48d-481b-a5fc-c943fe124904","Type":"ContainerStarted","Data":"64eedbd12fdc30e368c677be1cc0d7773d4eadd64d6ba8314af72550c4ede168"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.844932 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" event={"ID":"056f1862-446a-4aa9-9a9f-f09463c32dab","Type":"ContainerStarted","Data":"2afde7b0c1485178ee0e392a0a09c156aed7d4bca491bf9dcf9c5f07f63fa01d"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.852758 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" event={"ID":"d68754d4-260b-460e-a34e-3d4a7313e4eb","Type":"ContainerStarted","Data":"57d3ab7a6f55a7babed716d2ff54de05a138792ed7b9c74742a6f900f0326e85"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.854248 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerStarted","Data":"7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.854778 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.855460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" event={"ID":"4787cfd3-62d3-494b-94c9-e01ff459c73b","Type":"ContainerStarted","Data":"a5947e5bffea3a9e57650043923778e63b2387b31984bac0d10ed9bfde92bcb6"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.857169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerStarted","Data":"d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f"}
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.859543 4949 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ntmdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.859586 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.864802 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.865054 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.365043137 +0000 UTC m=+141.174873995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.869026 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" podStartSLOduration=118.869005193 podStartE2EDuration="1m58.869005193s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:24.841021569 +0000 UTC m=+140.650852437" watchObservedRunningTime="2026-01-20 14:52:24.869005193 +0000 UTC m=+140.678836051"
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.967556 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:24 crc kubenswrapper[4949]: E0120 14:52:24.968889 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.468869152 +0000 UTC m=+141.278700010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:24 crc kubenswrapper[4949]: I0120 14:52:24.996754 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.072006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.072295 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.572284613 +0000 UTC m=+141.382115471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.116492 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tsmsl" podStartSLOduration=119.116479525 podStartE2EDuration="1m59.116479525s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.112980644 +0000 UTC m=+140.922811502" watchObservedRunningTime="2026-01-20 14:52:25.116479525 +0000 UTC m=+140.926310383"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.172825 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.172987 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.67296166 +0000 UTC m=+141.482792528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.173063 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.173426 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.673412475 +0000 UTC m=+141.483243333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.273891 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.274240 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.774225237 +0000 UTC m=+141.584056095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.276529 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 14:52:25 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld
Jan 20 14:52:25 crc kubenswrapper[4949]: [+]process-running ok
Jan 20 14:52:25 crc kubenswrapper[4949]: healthz check failed
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.276578 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.376913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.377351 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.877336327 +0000 UTC m=+141.687167205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.392475 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xvbft" podStartSLOduration=119.392446247 podStartE2EDuration="1m59.392446247s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.390237371 +0000 UTC m=+141.200068239" watchObservedRunningTime="2026-01-20 14:52:25.392446247 +0000 UTC m=+141.202277105"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.405314 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mlc47" podStartSLOduration=119.40529314 podStartE2EDuration="1m59.40529314s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.337883468 +0000 UTC m=+141.147714326" watchObservedRunningTime="2026-01-20 14:52:25.40529314 +0000 UTC m=+141.215123998"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.442224 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" podStartSLOduration=120.442206261 podStartE2EDuration="2m0.442206261s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.434977631 +0000 UTC m=+141.244808489" watchObservedRunningTime="2026-01-20 14:52:25.442206261 +0000 UTC m=+141.252037119"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.479864 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.480346 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:25.980325163 +0000 UTC m=+141.790156021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.481934 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9ggcd" podStartSLOduration=119.481922219 podStartE2EDuration="1m59.481922219s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.480763158 +0000 UTC m=+141.290594036" watchObservedRunningTime="2026-01-20 14:52:25.481922219 +0000 UTC m=+141.291753077"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.540090 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kncwj" podStartSLOduration=119.540070451 podStartE2EDuration="1m59.540070451s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.536982384 +0000 UTC m=+141.346813242" watchObservedRunningTime="2026-01-20 14:52:25.540070451 +0000 UTC m=+141.349901299"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.551805 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r9dfg"]
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.569910 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8mlj4" podStartSLOduration=5.569892337 podStartE2EDuration="5.569892337s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.56501831 +0000 UTC m=+141.374849168" watchObservedRunningTime="2026-01-20 14:52:25.569892337 +0000 UTC m=+141.379723195"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.581416 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.581732 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.081718395 +0000 UTC m=+141.891549253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.597545 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j8fgh"]
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.640553 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" podStartSLOduration=119.64053552 podStartE2EDuration="1m59.64053552s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.632369369 +0000 UTC m=+141.442200227" watchObservedRunningTime="2026-01-20 14:52:25.64053552 +0000 UTC m=+141.450366378"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.641647 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"]
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.672685 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r8xkq" podStartSLOduration=119.672667006 podStartE2EDuration="1m59.672667006s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.669307051 +0000 UTC m=+141.479137909" watchObservedRunningTime="2026-01-20 14:52:25.672667006 +0000 UTC m=+141.482497864"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.682001 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.682320 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.182301298 +0000 UTC m=+141.992132156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.690849 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t"]
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.720733 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podStartSLOduration=119.720712541 podStartE2EDuration="1m59.720712541s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.713927597 +0000 UTC m=+141.523758455" watchObservedRunningTime="2026-01-20 14:52:25.720712541 +0000 UTC m=+141.530543409"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.774167 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" podStartSLOduration=120.77414079 podStartE2EDuration="2m0.77414079s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:25.765981719 +0000 UTC m=+141.575812607" watchObservedRunningTime="2026-01-20 14:52:25.77414079 +0000 UTC m=+141.583971648"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.784723 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.785105 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.285090937 +0000 UTC m=+142.094921805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.863830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" event={"ID":"113494fa-baf7-4f60-9a9c-e8c8d6abb146","Type":"ContainerStarted","Data":"f6284d9327fb4257f82064cb3518053de8a65a539d39a7ce3f622cb7176b100e"}
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.865452 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" event={"ID":"abb60fa1-1584-4837-890f-888754026b25","Type":"ContainerStarted","Data":"cff4dab721946a8596cca4daa01c28fbee97b92700c0fabbaf4d44f4ae341b63"}
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.869869 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerStarted","Data":"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"}
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.871548 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.873887 4949 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-brlp7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body=
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.873959 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused"
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.883313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerStarted","Data":"f4877eaf97bd7c4d0e52e4fddc8cae7a451b37b3fd251230d5ececd8dac1c70e"}
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.895587 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:25 crc kubenswrapper[4949]: E0120 14:52:25.896081 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.396061819 +0000 UTC m=+142.205892677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:25 crc kubenswrapper[4949]: I0120 14:52:25.957261 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" event={"ID":"182137c4-babb-4c69-b53d-d37131c3041a","Type":"ContainerStarted","Data":"042cff9994f5feb00d67ab5d027011d818eb7cfce28a6d3bf82bc68002753c5a"}
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.000794 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.001155 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.501139587 +0000 UTC m=+142.310970445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.058062 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j8fgh" event={"ID":"1a0cc344-c778-44a2-a6f6-e2067286c347","Type":"ContainerStarted","Data":"ff8e053dc01feb3ca00a6ffb9d8b998ddf5e18cc4ab24b633e4615219c1f20cf"}
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.058205 4949 patch_prober.go:28] interesting pod/console-operator-58897d9998-mlc47 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.058291 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mlc47" podUID="18aa9682-4716-4c4f-a53e-cc2f312c7c16" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.059168 4949 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ntmdh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.059216 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.071201 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 14:52:26 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld
Jan 20 14:52:26 crc kubenswrapper[4949]: [+]process-running ok
Jan 20 14:52:26 crc kubenswrapper[4949]: healthz check failed
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.072036 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.092165 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vzltk" podStartSLOduration=120.092130691 podStartE2EDuration="2m0.092130691s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:26.057797938 +0000 UTC m=+141.867628796" watchObservedRunningTime="2026-01-20 14:52:26.092130691 +0000 UTC m=+141.901961549"
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.102584 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.104895 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.604868149 +0000 UTC m=+142.414698997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.104964 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"]
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.152808 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"]
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.152854 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r"]
Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.170536 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b89af20_11f6_4e88_8b1c_5e5ff5b47a70.slice/crio-dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012 WatchSource:0}: Error finding container dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012: Status 404 returned error can't find the container with id dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.182081 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"]
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.199329 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"]
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.205745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.211179 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm"]
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.212020 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8"]
Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.212867 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.712846456 +0000 UTC m=+142.522677514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.220574 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bb9s9"]
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.307034 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.307313 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.80729905 +0000 UTC m=+142.617129908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.346799 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.346853 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.408312 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.408895 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:26.908868366 +0000 UTC m=+142.718699224 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.509732 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.509919 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.009886965 +0000 UTC m=+142.819717823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.510136 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.510466 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.010455985 +0000 UTC m=+142.820286843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.548886 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd"] Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.559334 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c06ab34_4b4e_4047_b32d_e9d36c792b1d.slice/crio-f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8 WatchSource:0}: Error finding container f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8: Status 404 returned error can't find the container with id f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8 Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.569499 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.585694 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pcdvd"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.589591 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m8sd9"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.613076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.613376 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.113361748 +0000 UTC m=+142.923192606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.620655 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.625389 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.625446 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49lg4"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.633991 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zvfr4"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.643434 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.653252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lvqj5"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.664015 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn"] Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.673728 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-27qdj"] Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.679818 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b9d1a76_4686_40ae_8b09_e66126088926.slice/crio-9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf WatchSource:0}: Error finding container 9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf: Status 404 returned error can't find the container with id 9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.685737 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod541edc44_7cd7_4c73_a5eb_48e2f5fd69b3.slice/crio-e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1 WatchSource:0}: Error finding container e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1: Status 404 returned error can't find the container with id e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1 Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.693028 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf794ef9_4dc6_4b8b_a2ff_caad4a9ef6c8.slice/crio-5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25 WatchSource:0}: Error finding container 5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25: Status 404 returned error can't find the container with 
id 5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25 Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.695305 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27518978_3cb4_4732_bc84_13abfa7e9c81.slice/crio-c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72 WatchSource:0}: Error finding container c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72: Status 404 returned error can't find the container with id c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72 Jan 20 14:52:26 crc kubenswrapper[4949]: W0120 14:52:26.717063 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980ff476_0915_44c2_8665_41d9074e3763.slice/crio-a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e WatchSource:0}: Error finding container a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e: Status 404 returned error can't find the container with id a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.717628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.718178 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.218154117 +0000 UTC m=+143.027985185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.820035 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.820532 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.32049851 +0000 UTC m=+143.130329378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:26 crc kubenswrapper[4949]: I0120 14:52:26.924435 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:26 crc kubenswrapper[4949]: E0120 14:52:26.924870 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.424856944 +0000 UTC m=+143.234687802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.000831 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:27 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.000904 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.026297 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.026742 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.526722932 +0000 UTC m=+143.336553780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.129181 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.129906 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.629887104 +0000 UTC m=+143.439717962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.143066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pcdvd" event={"ID":"8b9d1a76-4686-40ae-8b09-e66126088926","Type":"ContainerStarted","Data":"9aef443e20b57551f374efeba0fa40dd4bd9cf4141ae3dacb65c105449fdc8bf"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.149676 4949 patch_prober.go:28] interesting pod/apiserver-76f77b778f-r9kf7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]log ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]etcd ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/generic-apiserver-start-informers ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/max-in-flight-filter ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 20 14:52:27 crc kubenswrapper[4949]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/project.openshift.io-projectcache ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/openshift.io-startinformers ok Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/openshift.io-restmapperupdater ok 
Jan 20 14:52:27 crc kubenswrapper[4949]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 20 14:52:27 crc kubenswrapper[4949]: livez check failed Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.149746 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7" podUID="1c433a7c-ae2d-4320-b456-58b37bdd5f22" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.155997 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.156227 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.164835 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" event={"ID":"980ff476-0915-44c2-8665-41d9074e3763","Type":"ContainerStarted","Data":"a1a8496e32fca4c4bafa8f47cdbef6bf8fbb1c387899a0f588ffbb3fcfa2d82e"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.171471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" event={"ID":"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70","Type":"ContainerStarted","Data":"66e9827fb3bb82c584bc32f05e0f8f700c6b1501183f3f87fa681461047289a8"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.171539 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" event={"ID":"0b89af20-11f6-4e88-8b1c-5e5ff5b47a70","Type":"ContainerStarted","Data":"dd9470a378ccf8e8aa35d7b21aa0d89ae0daa6b2ac24f2a4f3a783909f831012"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.172037 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.183405 4949 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qw6xk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.183501 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" podUID="0b89af20-11f6-4e88-8b1c-5e5ff5b47a70" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.184244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerStarted","Data":"ac59b7209bc06c22266fbcfb399a323980c8358ea6ee6aa0281732b4df79dc93"} Jan 20 
14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.184275 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerStarted","Data":"2f1015ed4b727aa9d18ec666b0d9f97e94b1811fa98c890bd182b37558631aa1"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.196682 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" event={"ID":"95c38c39-62f0-4343-9628-5070d8cc10b7","Type":"ContainerStarted","Data":"424e1b64a524a47e51b2ba62e9505534b9a06aa01dac67e538cef3ad47d64694"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.216790 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" podStartSLOduration=121.216771886 podStartE2EDuration="2m1.216771886s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:27.212984895 +0000 UTC m=+143.022815753" watchObservedRunningTime="2026-01-20 14:52:27.216771886 +0000 UTC m=+143.026602734" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.217544 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" event={"ID":"10228b44-2c32-4fab-a4f9-c703ef0b6b39","Type":"ContainerStarted","Data":"081f2e4003d8f9201e7929bd8bcde0960e39c5b5577548930dc3c31ada7d254b"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.227139 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" event={"ID":"8516de03-2f1a-43e7-8af0-116378f96b8f","Type":"ContainerStarted","Data":"aea15de4ff2d58551df56dd45a7919d7993fdc247e8dd58aaaaca249023ddcbf"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.231377 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.231710 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.731696331 +0000 UTC m=+143.541527189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.238353 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" event={"ID":"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8","Type":"ContainerStarted","Data":"5050cfcfef85e748d81a2e11d1d9dedb3e1e4d2f47d86642af3963d2e5fe8a25"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.251625 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"e2566b94316dde6fdc7bd6e4758b6125793d2ddc3884b9266b52fb3465d2aed1"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.269668 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" event={"ID":"182137c4-babb-4c69-b53d-d37131c3041a","Type":"ContainerStarted","Data":"03f236ec2ada28f4035f0b8b46c58d61ddedd828ed34153b3fa96773db1d662a"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.272253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" event={"ID":"97a47ded-8ed0-4c5c-8e53-2ff63413b679","Type":"ContainerStarted","Data":"b71802b4a268a551cf59c2f1ff7546ae609bb6b755f54e3afaa3fe1e8a7120bf"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.279676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j8fgh" event={"ID":"1a0cc344-c778-44a2-a6f6-e2067286c347","Type":"ContainerStarted","Data":"b253c8fc26bc2b66b2b01d1ac645f2d2adcf1e7f951e6927b9b103f64c77c4c6"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.284053 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bb9s9" event={"ID":"33ca7885-743f-48cd-b3ba-80f9a1f8cf85","Type":"ContainerStarted","Data":"430500d9817934d094dce88f573d7e9fd6f3d6e1e91165afa5647cb2aeac43f0"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.286472 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" event={"ID":"8169cee8-7942-4c7f-92bd-f89e4b027b83","Type":"ContainerStarted","Data":"cd3f3009560b5f0244c6c09fa458917f6d016b986988143f37d5ba45f8141049"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.290970 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" event={"ID":"c47ecb6d-9ecf-480f-b605-4dd91e900521","Type":"ContainerStarted","Data":"dd7ad27a82b648cd382e54d39f4197d808a07dc272fcc81c41f01a5bbb73db24"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.295652 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" event={"ID":"27518978-3cb4-4732-bc84-13abfa7e9c81","Type":"ContainerStarted","Data":"c8978262403da17d486b1d9a46e60fa5f554b1764de138efba9b37f82ea99b72"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 
14:52:27.298787 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" event={"ID":"e45c974f-4645-4895-9f73-cfd03e798e00","Type":"ContainerStarted","Data":"564ed981cd69a7c90c77e596e94cb852780bd48591ae11b510942a99316f069d"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.309064 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" event={"ID":"dd8570b5-67a9-4655-bc3e-c36bb6d5c646","Type":"ContainerStarted","Data":"7de177c8c08c4306419bc51015d9456fa67a418318f6ba5d6e6d89e0c979e401"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.312132 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" event={"ID":"4f451eb2-597d-47c6-aa10-66a79776f101","Type":"ContainerStarted","Data":"df2fc0067953b8dd35a8893742174b90514642cc8b0bf6c340d9c8b03301857c"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.313806 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerStarted","Data":"f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.315638 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" event={"ID":"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0","Type":"ContainerStarted","Data":"1ce55b3def8fd7237e29b69d58e832fa5d1e9211aa835135d090d3e30ca5e952"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.333340 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.334497 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.83448162 +0000 UTC m=+143.644312478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.341626 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" event={"ID":"8b8351da-e624-4d42-be80-14e2c90c57f4","Type":"ContainerStarted","Data":"a4bbe073d535d29b5009e40705559141ed90ca4e136177e8ba291c05acba6004"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.348129 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" event={"ID":"3fae0085-f1fb-44ed-b871-0e6fe5072006","Type":"ContainerStarted","Data":"d6dbc3461a89c8d9ee6144afd3d08e6e1240b6ab84b323f030a5dd9b98d36518"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.356583 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerStarted","Data":"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"} Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.360794 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" podStartSLOduration=121.360780104 podStartE2EDuration="2m1.360780104s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:27.360351381 +0000 UTC m=+143.170182239" watchObservedRunningTime="2026-01-20 14:52:27.360780104 +0000 UTC m=+143.170610962" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.436479 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.436965 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:27.936945187 +0000 UTC m=+143.746776045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.538851 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.543677 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.043651802 +0000 UTC m=+143.853482660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.575058 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.576685 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.588295 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.614790 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-w9d9r" podStartSLOduration=121.614769641 podStartE2EDuration="2m1.614769641s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:27.386836992 +0000 UTC m=+143.196667840" watchObservedRunningTime="2026-01-20 14:52:27.614769641 +0000 UTC m=+143.424600499" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.640243 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.640561 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-20 14:52:28.140503756 +0000 UTC m=+143.950334614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.641116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.642425 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.142414023 +0000 UTC m=+143.952244941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.668545 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.743990 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.744085 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.244071753 +0000 UTC m=+144.053902611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.744342 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.744622 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.244616062 +0000 UTC m=+144.054446920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.844796 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.845004 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.344977977 +0000 UTC m=+144.154808845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.845088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.845365 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.34535401 +0000 UTC m=+144.155184868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.946768 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.947292 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.44725364 +0000 UTC m=+144.257084498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:27 crc kubenswrapper[4949]: I0120 14:52:27.947457 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:27 crc kubenswrapper[4949]: E0120 14:52:27.947879 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.447872211 +0000 UTC m=+144.257703069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.001620 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:28 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.001731 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.048270 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.048444 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.548418573 +0000 UTC m=+144.358249431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.048595 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.048937 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.548928011 +0000 UTC m=+144.358758869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.149381 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.149541 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.649503024 +0000 UTC m=+144.459333882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.149628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.149907 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.649900648 +0000 UTC m=+144.459731496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.250996 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.251190 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.751161564 +0000 UTC m=+144.560992422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.251464 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.251792 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.751779925 +0000 UTC m=+144.561610783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.352278 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.352491 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.852463332 +0000 UTC m=+144.662294260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.352603 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.352896 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.852884947 +0000 UTC m=+144.662715805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.363240 4949 generic.go:334] "Generic (PLEG): container finished" podID="01bfc821-a8ed-4dbd-a5b1-fa6659a6499f" containerID="ac59b7209bc06c22266fbcfb399a323980c8358ea6ee6aa0281732b4df79dc93" exitCode=0 Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.363285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerDied","Data":"ac59b7209bc06c22266fbcfb399a323980c8358ea6ee6aa0281732b4df79dc93"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.364406 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerStarted","Data":"f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.365299 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" event={"ID":"3fae0085-f1fb-44ed-b871-0e6fe5072006","Type":"ContainerStarted","Data":"e167b7406760722ef8a3145bad0c131e20ff501075140a3dffa9b6c8f24160f0"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.366844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" event={"ID":"27518978-3cb4-4732-bc84-13abfa7e9c81","Type":"ContainerStarted","Data":"6e637e7527d8d56982e8c05a19643930ca6ce823128b30710e128f64e5621d73"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.370096 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" event={"ID":"dd8570b5-67a9-4655-bc3e-c36bb6d5c646","Type":"ContainerStarted","Data":"cde8ce65ee3d95e1e2f3fe2c8ba0eef3cf5a5e5043feffee539b66d015a35cd3"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.372398 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" event={"ID":"4f451eb2-597d-47c6-aa10-66a79776f101","Type":"ContainerStarted","Data":"c97bfe2c8e22701445e6f3a42cd869302d25270da2734d343fe48bb6f3efbc63"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.378449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" event={"ID":"ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0","Type":"ContainerStarted","Data":"e4259156daa53f7e6836156dae598788fb2ca291db48d1d430689acae0308801"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.379576 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.381834 4949 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mtrqm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.381871 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" podUID="ff4eb484-f97a-4641-b5fd-aa8e4ad62bd0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.382205 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" event={"ID":"c47ecb6d-9ecf-480f-b605-4dd91e900521","Type":"ContainerStarted","Data":"4a0808c77baeac9aa271bd8d77baa686a1e07d75220138ea25f508db2bf9f36a"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.383489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" event={"ID":"95c38c39-62f0-4343-9628-5070d8cc10b7","Type":"ContainerStarted","Data":"769db8d1fefe7b3b7dea2f892965a83fad2f71bab86e99631546c605de94eabb"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.393488 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pcdvd" event={"ID":"8b9d1a76-4686-40ae-8b09-e66126088926","Type":"ContainerStarted","Data":"248d60b18e845320dde8578674bcefc230ab12222172e5999e2351618aa377fc"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.398460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" event={"ID":"97a47ded-8ed0-4c5c-8e53-2ff63413b679","Type":"ContainerStarted","Data":"7d4a06039a6cf2ee96d82d36197e2c112b483911a8f255e4243ee78937cbfa31"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.400465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" 
event={"ID":"10228b44-2c32-4fab-a4f9-c703ef0b6b39","Type":"ContainerStarted","Data":"5c0b34b53d600ac0058ee2cc3d7e7e252c0aec4eec10d97975c14049813246b9"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.401093 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.401901 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hfg2r" event={"ID":"8b8351da-e624-4d42-be80-14e2c90c57f4","Type":"ContainerStarted","Data":"b123d80c9c294cebda319fb19dd5f8c84f5f8bd2e873afb724bc1cab5331e3f6"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.403832 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" event={"ID":"980ff476-0915-44c2-8665-41d9074e3763","Type":"ContainerStarted","Data":"5538309a571039b1ab8b68f878dfd217d02dc20f8d0cb090233a377f56e23b84"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.404131 4949 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xxm4k container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.404187 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" podUID="10228b44-2c32-4fab-a4f9-c703ef0b6b39" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.407945 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bb9s9" event={"ID":"33ca7885-743f-48cd-b3ba-80f9a1f8cf85","Type":"ContainerStarted","Data":"fa8d03737761ebb7770d3876c0c9250a9dba142993f1bbae8225ae59f81887b8"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.408119 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bb9s9" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.408552 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" podStartSLOduration=122.408510582 podStartE2EDuration="2m2.408510582s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.406944729 +0000 UTC m=+144.216775587" watchObservedRunningTime="2026-01-20 14:52:28.408510582 +0000 UTC m=+144.218341430" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.416093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" event={"ID":"8169cee8-7942-4c7f-92bd-f89e4b027b83","Type":"ContainerStarted","Data":"ac36a34f4a6302371da08b626f0511942ccffe7f7de56791bb78a7fc7179a13d"} Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.409781 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: 
connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.416917 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.417240 4949 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qw6xk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.417262 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk" podUID="0b89af20-11f6-4e88-8b1c-5e5ff5b47a70" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.426790 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c8phr" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.432641 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5rgxn" podStartSLOduration=122.432586792 podStartE2EDuration="2m2.432586792s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.423496569 +0000 UTC m=+144.233327447" watchObservedRunningTime="2026-01-20 14:52:28.432586792 +0000 UTC m=+144.242417650" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.441205 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" podStartSLOduration=122.441185127 podStartE2EDuration="2m2.441185127s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.438715382 +0000 UTC m=+144.248546240" watchObservedRunningTime="2026-01-20 14:52:28.441185127 +0000 UTC m=+144.251015985" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.453423 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.453603 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.953577494 +0000 UTC m=+144.763408352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.455493 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.456704 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cx9z8" podStartSLOduration=122.456687261 podStartE2EDuration="2m2.456687261s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.453258673 +0000 UTC m=+144.263089541" watchObservedRunningTime="2026-01-20 14:52:28.456687261 +0000 UTC m=+144.266518119" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.458475 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:28.958461932 +0000 UTC m=+144.768292790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.473751 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bb9s9" podStartSLOduration=122.473731118 podStartE2EDuration="2m2.473731118s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.47205552 +0000 UTC m=+144.281886378" watchObservedRunningTime="2026-01-20 14:52:28.473731118 +0000 UTC m=+144.283561976" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.493419 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" podStartSLOduration=122.493399376 podStartE2EDuration="2m2.493399376s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.491039225 +0000 UTC m=+144.300870083" watchObservedRunningTime="2026-01-20 14:52:28.493399376 +0000 UTC m=+144.303230224" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.533057 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d5t2m" podStartSLOduration=122.533038821 podStartE2EDuration="2m2.533038821s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:28.531428405 +0000 UTC m=+144.341259263" watchObservedRunningTime="2026-01-20 14:52:28.533038821 +0000 UTC m=+144.342869679" Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.557206 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.559896 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.059872104 +0000 UTC m=+144.869703042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.659149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.659550 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.159535346 +0000 UTC m=+144.969366204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.760261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.760409 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.260390019 +0000 UTC m=+145.070220887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.760924 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.761253 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.261242368 +0000 UTC m=+145.071073226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.862389 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.862536 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.362503065 +0000 UTC m=+145.172333923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.862675 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.863020 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.363009653 +0000 UTC m=+145.172840511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.963629 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.963762 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.463744482 +0000 UTC m=+145.273575340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.964101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:28 crc kubenswrapper[4949]: E0120 14:52:28.964374 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.464366993 +0000 UTC m=+145.274197851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.997687 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Jan 20 14:52:28 crc kubenswrapper[4949]: [+]process-running ok Jan 20 14:52:28 crc kubenswrapper[4949]: healthz check failed Jan 20 14:52:28 crc kubenswrapper[4949]: I0120 14:52:28.997760 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.065029 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.065455 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.565436954 +0000 UTC m=+145.375267812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.118913 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.167283 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.168152 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.668131549 +0000 UTC m=+145.477962407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.268485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.268693 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.768667401 +0000 UTC m=+145.578498259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.268862 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.269124 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.769113427 +0000 UTC m=+145.578944285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.370291 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.370470 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.870446006 +0000 UTC m=+145.680276864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.370738 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.371102 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.871089618 +0000 UTC m=+145.680920476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.423983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" event={"ID":"8169cee8-7942-4c7f-92bd-f89e4b027b83","Type":"ContainerStarted","Data":"8b27e97a3a49b129c1e2c504b175540c2f99106cc06d1e35195868a0b3324986"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.425469 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"13f137922c00c0cdf0032d9d80ba9b1ff8fddb596158409726cf210b5b4cb664"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.427321 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" event={"ID":"01bfc821-a8ed-4dbd-a5b1-fa6659a6499f","Type":"ContainerStarted","Data":"00050e542dc85b15c17d95b44edc637e6dd0cecd210f0f6c9937c9ef4015e132"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.427464 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.430255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" event={"ID":"182137c4-babb-4c69-b53d-d37131c3041a","Type":"ContainerStarted","Data":"451df6a41dcbb24907ee38dbf28eb1ccb074f689112d9838bfda128a97ab0cae"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.432207 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" 
event={"ID":"97a47ded-8ed0-4c5c-8e53-2ff63413b679","Type":"ContainerStarted","Data":"4db7b9cde5c6b30778f765bc87f2a476bcbb865361bb82e84bd905821869e1f8"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.432329 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.434228 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" event={"ID":"8516de03-2f1a-43e7-8af0-116378f96b8f","Type":"ContainerStarted","Data":"918de21dc6f04b4e7cfdc425de78cd179f7331133bcb46dc030f66710e1b4c50"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.434275 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" event={"ID":"8516de03-2f1a-43e7-8af0-116378f96b8f","Type":"ContainerStarted","Data":"a833e75596fde7deef6100be6af7719851d5b8f4ec76c72cd2aaba4a91b9f2d3"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.436045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" event={"ID":"3fae0085-f1fb-44ed-b871-0e6fe5072006","Type":"ContainerStarted","Data":"274bc8a72730e898c80a9ff7c7ad46ef05b7f1572184473f729a1d7550fc8b43"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.438080 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" event={"ID":"df794ef9-4dc6-4b8b-a2ff-caad4a9ef6c8","Type":"ContainerStarted","Data":"54d29b1ba35eb7f72b23fc34304c450d006507692651cf5dbd71a87c85cd3e7a"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.439865 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" event={"ID":"c47ecb6d-9ecf-480f-b605-4dd91e900521","Type":"ContainerStarted","Data":"5e7a5fa75e778cef7d12363811f6517dc5566793acad091725ed76bcbeee2345"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.441822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" event={"ID":"e45c974f-4645-4895-9f73-cfd03e798e00","Type":"ContainerStarted","Data":"ac04aaec015ed06b9d108fd834a620cd7794a897c430383cc6ba7d6f0ce22fe4"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.444269 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.444309 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.444944 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j8fgh" event={"ID":"1a0cc344-c778-44a2-a6f6-e2067286c347","Type":"ContainerStarted","Data":"bfa7d01031966304a0f3591c2f8259b9c4c472ac4d4c0c02ed28d908183bd2ea"} Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.445103 4949 patch_prober.go:28] interesting 
pod/olm-operator-6b444d44fb-xxm4k container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.445129 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k" podUID="10228b44-2c32-4fab-a4f9-c703ef0b6b39" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.445618 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-j8fgh" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.459428 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lzfzd" podStartSLOduration=123.459407489 podStartE2EDuration="2m3.459407489s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.457834185 +0000 UTC m=+145.267665043" watchObservedRunningTime="2026-01-20 14:52:29.459407489 +0000 UTC m=+145.269238347" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.469971 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtrqm" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.475002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.476017 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:29.975998921 +0000 UTC m=+145.785829779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.513539 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw" podStartSLOduration=123.513497021 podStartE2EDuration="2m3.513497021s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.488721368 +0000 UTC m=+145.298552236" watchObservedRunningTime="2026-01-20 14:52:29.513497021 +0000 UTC m=+145.323327879" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.515555 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-btnmm" podStartSLOduration=123.515540882 podStartE2EDuration="2m3.515540882s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.511389449 +0000 UTC m=+145.321220337" watchObservedRunningTime="2026-01-20 14:52:29.515540882 +0000 UTC m=+145.325371740" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.537656 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vr5fk" podStartSLOduration=123.537629773 podStartE2EDuration="2m3.537629773s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.535550741 +0000 UTC m=+145.345381629" watchObservedRunningTime="2026-01-20 14:52:29.537629773 +0000 UTC m=+145.347460641" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.577468 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zvfr4" podStartSLOduration=123.577449534 podStartE2EDuration="2m3.577449534s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.574367938 +0000 UTC m=+145.384198796" watchObservedRunningTime="2026-01-20 14:52:29.577449534 +0000 UTC m=+145.387280392" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.586701 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.586974 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-20 14:52:30.086964891 +0000 UTC m=+145.896795749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.602555 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-j8fgh" podStartSLOduration=9.602531718 podStartE2EDuration="9.602531718s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.599169362 +0000 UTC m=+145.409000230" watchObservedRunningTime="2026-01-20 14:52:29.602531718 +0000 UTC m=+145.412362576" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.643386 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pcdvd" podStartSLOduration=9.643367914 podStartE2EDuration="9.643367914s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.642322757 +0000 UTC m=+145.452153625" watchObservedRunningTime="2026-01-20 14:52:29.643367914 +0000 UTC m=+145.453198772" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.643754 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47" podStartSLOduration=123.643739526 podStartE2EDuration="2m3.643739526s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.626654008 +0000 UTC m=+145.436484866" watchObservedRunningTime="2026-01-20 14:52:29.643739526 +0000 UTC m=+145.453570384" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.686216 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-r9dfg" podStartSLOduration=123.686197159 podStartE2EDuration="2m3.686197159s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.665290638 +0000 UTC m=+145.475121496" watchObservedRunningTime="2026-01-20 14:52:29.686197159 +0000 UTC m=+145.496028017" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.687863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.688124 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.188109864 +0000 UTC m=+145.997940722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.703984 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-27qdj" podStartSLOduration=123.70396384 podStartE2EDuration="2m3.70396384s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.685609498 +0000 UTC m=+145.495440366" watchObservedRunningTime="2026-01-20 14:52:29.70396384 +0000 UTC m=+145.513794698" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.751405 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49lg4" podStartSLOduration=123.751385783 podStartE2EDuration="2m3.751385783s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.749215168 +0000 UTC m=+145.559046026" watchObservedRunningTime="2026-01-20 14:52:29.751385783 +0000 UTC m=+145.561216641" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.752654 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5j28t" podStartSLOduration=124.752643496 podStartE2EDuration="2m4.752643496s" podCreationTimestamp="2026-01-20 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.70626322 +0000 UTC m=+145.516094078" watchObservedRunningTime="2026-01-20 14:52:29.752643496 +0000 UTC m=+145.562474354" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.780190 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-m8sd9" podStartSLOduration=123.780168654 podStartE2EDuration="2m3.780168654s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:29.779709628 +0000 UTC m=+145.589540486" watchObservedRunningTime="2026-01-20 14:52:29.780168654 +0000 UTC m=+145.589999522" Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.790372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.790796 4949 nestedpendingoperations.go:348] 
Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.790796 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.290777659 +0000 UTC m=+146.100608517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.891997 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.892397 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.392376408 +0000 UTC m=+146.202207266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.892828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.893160 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.393148045 +0000 UTC m=+146.202978903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:29 crc kubenswrapper[4949]: I0120 14:52:29.993540 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:29 crc kubenswrapper[4949]: E0120 14:52:29.994039 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.494017868 +0000 UTC m=+146.303848726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:29.999435 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 14:52:30 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld
Jan 20 14:52:30 crc kubenswrapper[4949]: [+]process-running ok
Jan 20 14:52:30 crc kubenswrapper[4949]: healthz check failed
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:29.999494 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.094759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
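
The router startup-probe failures above (they recur several more times below) follow the Kubernetes healthz convention: each subcheck prints as "[+]name ok" or "[-]name failed", any failing subcheck turns the endpoint into an HTTP 500, and prober.go then records the failure. A minimal sketch of that convention, not the router's actual code:

```go
// healthz_sketch.go - illustrate the "[-]backend-http failed ... healthz
// check failed" format seen in the router probe output. One failing subcheck
// is enough to return HTTP 500, which the kubelet prober reports as
// "HTTP probe failed with statuscode: 500".
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("no backends") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}))
	http.ListenAndServe(":8080", nil)
}
```

The downloads-pod readiness failure a little further down is the plainer variant: the probe's GET is refused outright because nothing is listening on 10.217.0.11:8080 yet.
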
Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.095103 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.595088068 +0000 UTC m=+146.404918926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.196319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.196786 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.696753259 +0000 UTC m=+146.506584117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.298035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.298343 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.798331317 +0000 UTC m=+146.608162175 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.398804 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.399185 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:30.899167329 +0000 UTC m=+146.708998187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.454710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"51abf2cde66e3374d1216323da07ae7e0617d8bda55b3f0ca19d44fdeabdff67"} Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.457948 4949 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.458075 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.458115 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.504388 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.507639 4949 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.007622413 +0000 UTC m=+146.817453271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.606426 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.606849 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.10682892 +0000 UTC m=+146.916659788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.708230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.708578 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.208564923 +0000 UTC m=+147.018395781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.809068 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.809274 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.3092465 +0000 UTC m=+147.119077348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.809383 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.809891 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.309875881 +0000 UTC m=+147.119706739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.910629 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.910854 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.410822537 +0000 UTC m=+147.220653405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.910913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:30 crc kubenswrapper[4949]: E0120 14:52:30.911253 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 14:52:31.411242682 +0000 UTC m=+147.221073600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x8799" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.941584 4949 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T14:52:30.457969644Z","Handler":null,"Name":""}
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.946030 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jpqvc"]
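
The RegisterPlugin operation just above is the kubelet reacting to the socket the plugin watcher cached at 14:52:30.457948; the csi_plugin.go entries just below show it validating the driver's reported name, endpoint and version. That report travels over the Registration service's GetInfo RPC on the -reg.sock. A sketch of the handshake using the upstream pluginregistration v1 bindings; treat the exact wiring as an assumption, not the kubelet's literal code:

```go
// getinfo_sketch.go - dial a plugin registration socket and ask the driver
// for its identity, roughly what the kubelet does when plugin_watcher.go
// reports a new socket. For the driver in this log it should answer with
// name kubevirt.io.hostpath-provisioner, endpoint
// /var/lib/kubelet/plugins/csi-hostpath/csi.sock and versions ["1.0.0"].
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

func main() {
	sock := "unix:///var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
	conn, err := grpc.Dial(sock, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	info, err := registerapi.NewRegistrationClient(conn).GetInfo(ctx, &registerapi.InfoRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(info.GetType(), info.GetName(), info.GetEndpoint(), info.GetSupportedVersions())
}
```
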
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.946918 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.948912 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.950483 4949 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.950535 4949 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.970548 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"]
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.996509 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 14:52:30 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld
Jan 20 14:52:30 crc kubenswrapper[4949]: [+]process-running ok
Jan 20 14:52:30 crc kubenswrapper[4949]: healthz check failed
Jan 20 14:52:30 crc kubenswrapper[4949]: I0120 14:52:30.996591 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.011628 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.011930 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.012132 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.012168 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.015535 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.112891 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.112938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.112974 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.113000 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.113850 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.114224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.116052 4949 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.116084 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.136097 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"community-operators-jpqvc\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.143626 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"]
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.144794 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.146102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x8799\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.154893 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.156307 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"]
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.213620 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.213660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.213683 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
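
The "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." entry just above, immediately followed by "MountVolume.MountDevice succeeded", is the attacher consulting the node plugin's capabilities: a driver that does not advertise STAGE_UNSTAGE_VOLUME gets no NodeStageVolume call, so MountDevice reduces to bookkeeping (recording the global mount path) and the kubelet proceeds straight to SetUp (NodePublishVolume). A sketch of that capability probe against the driver endpoint from this log, using the CSI spec's Go bindings:

```go
// stage_check_sketch.go - ask a CSI node plugin whether it supports
// STAGE_UNSTAGE_VOLUME. The hostpath driver in this log answers no, which
// is why the attacher skips the staging step and SetUp follows directly.
package main

import (
	"context"
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(context.Background(),
		&csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		panic(err)
	}
	supported := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			supported = true
		}
	}
	fmt.Println("STAGE_UNSTAGE_VOLUME supported:", supported)
}
```
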
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.260816 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.314951 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315047 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315189 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315562 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.315671 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.336646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"certified-operators-sr2h8\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.348209 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.352130 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.357292 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-r9kf7"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.363111 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5llwq"]
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.364481 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.375325 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.418553 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.418992 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.419046 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.468606 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.479297 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"8ba0a411ba3c4385deefeb5e66683996b46a54845f50557b17601a3469b0487f"} Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.479346 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" event={"ID":"541edc44-7cd7-4c73-a5eb-48e2f5fd69b3","Type":"ContainerStarted","Data":"3c5a06049a4e031cae18188af6a5d7ab1b23ee4921febe21c4daf20e2466c8d2"} Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.526657 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.526741 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.526799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc 
kubenswrapper[4949]: I0120 14:52:31.531067 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.531436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.541533 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lvqj5" podStartSLOduration=11.541505824 podStartE2EDuration="11.541505824s" podCreationTimestamp="2026-01-20 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:31.529445568 +0000 UTC m=+147.339276426" watchObservedRunningTime="2026-01-20 14:52:31.541505824 +0000 UTC m=+147.351336682" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.542807 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.543721 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.556836 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"community-operators-5llwq\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.593383 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.626417 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.629568 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.629645 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.629723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod 
\"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.701388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.730925 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.731197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.731251 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.731771 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.732065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.747769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod \"certified-operators-mhc4n\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.870881 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n"
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.923214 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5llwq"]
Jan 20 14:52:31 crc kubenswrapper[4949]: W0120 14:52:31.937865 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf1fd354_0dd7_4186_b8f7_eb06991f4632.slice/crio-e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3 WatchSource:0}: Error finding container e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3: Status 404 returned error can't find the container with id e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.966246 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"]
Jan 20 14:52:31 crc kubenswrapper[4949]: W0120 14:52:31.984191 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod595f245f_676f_4ef1_8073_5e235b4a338a.slice/crio-75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9 WatchSource:0}: Error finding container 75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9: Status 404 returned error can't find the container with id 75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9
Jan 20 14:52:31 crc kubenswrapper[4949]: I0120 14:52:31.999661 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 14:52:31 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld
Jan 20 14:52:31 crc kubenswrapper[4949]: [+]process-running ok
Jan 20 14:52:31 crc kubenswrapper[4949]: healthz check failed
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:31.999975 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.056004 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"]
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.074387 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
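
Each marketplace catalog pod in this stretch (community-operators-jpqvc, certified-operators-sr2h8, community-operators-5llwq, certified-operators-mhc4n) mounts the same trio of volumes: two emptyDirs, "utilities" and "catalog-content", which the catalog image's init step fills before the registry server starts (the exitCode=0 "container finished" events at 14:52:32 below), plus the auto-injected projected service-account token (kube-api-access-*). A sketch of that volume set with the k8s.io/api types; the names mirror the log, not a real manifest:

```go
// catalog_volumes_sketch.go - the volume trio the reconciler mounts for each
// marketplace catalog pod above. The kube-api-access-* projected token is
// normally injected automatically by the API server; it is spelled out here
// only to show what the "kubernetes.io/projected" volumes in the log are.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "kube-api-access-89h6r", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"},
				}},
			}}},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
```
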
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.075276 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.079001 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.079260 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.087197 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.141704 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.141791 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.143326 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"]
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.243141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.243210 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.243279 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.290060 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.393969 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qhn47"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.437204 4949 util.go:30] "No sandbox for pod can be
found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.488152 4949 generic.go:334] "Generic (PLEG): container finished" podID="78cf28ec-e605-49c2-882a-5cb98697605b" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be" exitCode=0
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.488392 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.488444 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerStarted","Data":"cfc38db22b8953300879f0bf00176a88bf6635c28a6beffd49284a3128d08941"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.492892 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.495452 4949 generic.go:334] "Generic (PLEG): container finished" podID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" exitCode=0
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.495494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.495542 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerStarted","Data":"5386e0b6f5f81c0affeb756c00a742c0370df0824ff74eddb71abeead647e2e6"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.503773 4949 generic.go:334] "Generic (PLEG): container finished" podID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerID="f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf" exitCode=0
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.503854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerDied","Data":"f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.509277 4949 generic.go:334] "Generic (PLEG): container finished" podID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerID="e457b0da5f8d7f599c13928f4a9416d0d3623297c6f14359bad682b4ffdc7a4a" exitCode=0
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.509357 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"e457b0da5f8d7f599c13928f4a9416d0d3623297c6f14359bad682b4ffdc7a4a"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.509391 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerStarted","Data":"461db7652293e0c019275b02f84835686fdbeffea7ab03b5ba355fd27be457ec"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.518488 4949 generic.go:334] "Generic (PLEG): container finished" podID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" exitCode=0
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.518604 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.518637 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerStarted","Data":"e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.543362 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerStarted","Data":"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.543747 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerStarted","Data":"75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9"}
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.543887 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548051 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548111 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.548201 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.552289 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.554682 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.555137 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.568417 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.636652 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.662529 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" podStartSLOduration=126.662489804 podStartE2EDuration="2m6.662489804s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:52:32.632005815 +0000 UTC m=+148.441836673" watchObservedRunningTime="2026-01-20 14:52:32.662489804 +0000 UTC m=+148.472320672"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.673391 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.780778 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.782088 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.798869 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.810435 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.918191 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mlc47"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.950051 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"]
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.957578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"]
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.957701 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.960940 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.992827 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.998259 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 14:52:32 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld
Jan 20 14:52:32 crc kubenswrapper[4949]: [+]process-running ok
Jan 20 14:52:32 crc kubenswrapper[4949]: healthz check failed
Jan 20 14:52:32 crc kubenswrapper[4949]: I0120 14:52:32.998309 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.058121 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.058181 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.058290 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: W0120 14:52:33.120726 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629 WatchSource:0}: Error finding container 95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629: Status 404 returned error can't find the container with id 95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.159488 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.159593 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.159633 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.160009 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.160684 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: W0120 14:52:33.175060 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08 WatchSource:0}: Error finding container cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08: Status 404 returned error can't find the container with id cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.175454 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod \"redhat-marketplace-n6ccj\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.206372 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.206428 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.206480 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb9s9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.206538 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bb9s9" podUID="33ca7885-743f-48cd-b3ba-80f9a1f8cf85" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.214189 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.214244 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.216068 4949 patch_prober.go:28] interesting pod/console-f9d7485db-w9d9r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.216123 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-w9d9r" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.279063 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.340011 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"]
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.341024 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.349180 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"]
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.362975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.363008 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.363274 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.412753 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xxm4k"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.457194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qw6xk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.465006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.465070 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.465133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.466241 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.466735 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.488778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"redhat-marketplace-qwbjk\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.567113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"95a7913babc243eb33944862784ff747e52a844e879ac29618e2f010874a0629"}
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.568263 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cfcb0c9ac1ad96cdb649fcbc577ab730a118eb76f0638527c97b8f6949c80e08"}
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.569893 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9b0db30d-95d1-498f-8c18-5dd0a553d48f","Type":"ContainerStarted","Data":"8080bdd092a4ff8f5e40d9fb328f96b240d8ae56e960e70632705b400d1ab276"}
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.571134 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ae626303c8f63ce3416f52b81cac4275bb0c82ef7397eda9495ff514cb6ded8b"}
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.664211 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.694877 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"]
Jan 20 14:52:33 crc kubenswrapper[4949]: W0120 14:52:33.701874 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da2cb76_6534_4d77_95c0_3d6aaff0de4b.slice/crio-2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c WatchSource:0}: Error finding container 2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c: Status 404 returned error can't find the container with id 2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.832055 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.870887 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") pod \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") "
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.870940 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") pod \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") "
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.871041 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") pod \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\" (UID: \"8c06ab34-4b4e-4047-b32d-e9d36c792b1d\") "
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.871933 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c06ab34-4b4e-4047-b32d-e9d36c792b1d" (UID: "8c06ab34-4b4e-4047-b32d-e9d36c792b1d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.878197 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq" (OuterVolumeSpecName: "kube-api-access-nn2mq") pod "8c06ab34-4b4e-4047-b32d-e9d36c792b1d" (UID: "8c06ab34-4b4e-4047-b32d-e9d36c792b1d"). InnerVolumeSpecName "kube-api-access-nn2mq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.895732 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c06ab34-4b4e-4047-b32d-e9d36c792b1d" (UID: "8c06ab34-4b4e-4047-b32d-e9d36c792b1d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.972763 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn2mq\" (UniqueName: \"kubernetes.io/projected/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-kube-api-access-nn2mq\") on node \"crc\" DevicePath \"\""
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.972789 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.972797 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c06ab34-4b4e-4047-b32d-e9d36c792b1d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.996933 4949 patch_prober.go:28] interesting pod/router-default-5444994796-kncwj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 14:52:33 crc kubenswrapper[4949]: [+]has-synced ok
Jan 20 14:52:33 crc kubenswrapper[4949]: [+]process-running ok
Jan 20 14:52:33 crc kubenswrapper[4949]: healthz check failed
Jan 20 14:52:33 crc kubenswrapper[4949]: I0120 14:52:33.997062 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kncwj" podUID="fe950de2-c48d-481b-a5fc-c943fe124904" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.142268 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"]
Jan 20 14:52:34 crc kubenswrapper[4949]: E0120 14:52:34.142615 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerName="collect-profiles"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.142631 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerName="collect-profiles"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.142788 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" containerName="collect-profiles"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.143691 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.143852 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"]
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.145872 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.151123 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"]
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.279096 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.279570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.280217 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.381195 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.381308 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.381374 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.382296 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.382778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.406805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"redhat-operators-p7gt2\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.476870 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.544503 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"]
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.551682 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.562583 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"]
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.597296 4949 generic.go:334] "Generic (PLEG): container finished" podID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" exitCode=0
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.597373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.597410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerStarted","Data":"2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.599720 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2cc000bc63b42e439848876226fd338380a4a30edbcad8125873ef756ca46287"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.602651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7" event={"ID":"8c06ab34-4b4e-4047-b32d-e9d36c792b1d","Type":"ContainerDied","Data":"f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.602674 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f253317d2a383e9011c6a7316753bb28e71257c93786ca3e89495a09232780e8"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.602691 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.605231 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"86921bad44dd07af0ecd8d9c11d27e021c1063acd7c271a109b11de4e3de4505"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.608827 4949 generic.go:334] "Generic (PLEG): container finished" podID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerID="3ab19178e8e5b3e5d35695a1e839bb6906d49d06b56d49738a525f3707f21354" exitCode=0
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.608874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9b0db30d-95d1-498f-8c18-5dd0a553d48f","Type":"ContainerDied","Data":"3ab19178e8e5b3e5d35695a1e839bb6906d49d06b56d49738a525f3707f21354"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.636462 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"485f0912a018177bdbfd4999745a51a501edd183b90801dc7a2d128bbf797b1b"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.636867 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.660322 4949 generic.go:334] "Generic (PLEG): container finished" podID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerID="ce75b13bd7f1b8e95f0b7ca8644b4475c13ac79f0a7f60da9f3dac9e11e95a9e" exitCode=0
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.660370 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"ce75b13bd7f1b8e95f0b7ca8644b4475c13ac79f0a7f60da9f3dac9e11e95a9e"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.660412 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerStarted","Data":"30a76834740fb17389d4718b7b04b96d874c290be714a19e58cb218d3172d38f"}
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.686910 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.687020 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.687039 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.797569 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.797865 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.797891 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.800416 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.800627 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.812451 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"]
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.829609 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"redhat-operators-qnf74\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:34 crc kubenswrapper[4949]: I0120 14:52:34.881222 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.001256 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.003801 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kncwj"
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.193980 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"]
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.703961 4949 generic.go:334] "Generic (PLEG): container finished" podID="2747a148-c24a-4d08-a2ca-19261c14c359" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" exitCode=0
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.704040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688"}
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.704363 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerStarted","Data":"c64e483ea895830221bcb3fd9971d012c5d2f19d12679860699582d93fd37367"}
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.719014 4949 generic.go:334] "Generic (PLEG): container finished" podID="13eef670-55b3-4832-a856-fe2bf8239996" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166" exitCode=0
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.719090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166"}
Jan 20 14:52:35 crc kubenswrapper[4949]: I0120 14:52:35.719153 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerStarted","Data":"7033fb6c503e5baf2b93082863e51771e454c06c8d508e3b8282afa6c65fa61f"}
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.166006 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335136 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") pod \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") "
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335230 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9b0db30d-95d1-498f-8c18-5dd0a553d48f" (UID: "9b0db30d-95d1-498f-8c18-5dd0a553d48f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335339 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") pod \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\" (UID: \"9b0db30d-95d1-498f-8c18-5dd0a553d48f\") "
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.335664 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.343969 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9b0db30d-95d1-498f-8c18-5dd0a553d48f" (UID: "9b0db30d-95d1-498f-8c18-5dd0a553d48f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.436595 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b0db30d-95d1-498f-8c18-5dd0a553d48f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.761143 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9b0db30d-95d1-498f-8c18-5dd0a553d48f","Type":"ContainerDied","Data":"8080bdd092a4ff8f5e40d9fb328f96b240d8ae56e960e70632705b400d1ab276"}
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.761241 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8080bdd092a4ff8f5e40d9fb328f96b240d8ae56e960e70632705b400d1ab276"
Jan 20 14:52:36 crc kubenswrapper[4949]: I0120 14:52:36.761366 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.696202 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 20 14:52:37 crc kubenswrapper[4949]: E0120 14:52:37.696467 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerName="pruner"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.696481 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerName="pruner"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.696625 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0db30d-95d1-498f-8c18-5dd0a553d48f" containerName="pruner"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.697072 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.699216 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.699430 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.703485 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.855678 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.855743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.956877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.956942 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:37 crc kubenswrapper[4949]: I0120 14:52:37.957038 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.102511 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.163939 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-j8fgh"
Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.321453 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:38 crc kubenswrapper[4949]: I0120 14:52:38.898559 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 20 14:52:39 crc kubenswrapper[4949]: I0120 14:52:39.962556 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerStarted","Data":"49cec00a653f42564610c9cde3991b107affc5b57f5a5a64434b6f8195cffe5a"}
Jan 20 14:52:39 crc kubenswrapper[4949]: I0120 14:52:39.962880 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerStarted","Data":"c1539a308fd744784b438cad04e9e177327a8b5e8d1d0de143603f94c11340c8"}
Jan 20 14:52:40 crc kubenswrapper[4949]: I0120 14:52:40.971869 4949 generic.go:334] "Generic (PLEG): container finished" podID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerID="49cec00a653f42564610c9cde3991b107affc5b57f5a5a64434b6f8195cffe5a" exitCode=0
Jan 20 14:52:40 crc kubenswrapper[4949]: I0120 14:52:40.971917 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerDied","Data":"49cec00a653f42564610c9cde3991b107affc5b57f5a5a64434b6f8195cffe5a"}
Jan 20 14:52:43 crc kubenswrapper[4949]: I0120 14:52:43.211538 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bb9s9"
Jan 20 14:52:43 crc kubenswrapper[4949]: I0120 14:52:43.249325 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:43 crc kubenswrapper[4949]: I0120 14:52:43.255526 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-w9d9r"
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.720252 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.726994 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa4eae9d-b492-4fd3-8baf-38ed726d9e4c-metrics-certs\") pod \"network-metrics-daemon-hlfls\" (UID: \"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c\") " pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.859906 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.924627 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") pod \"688f7366-a782-4bc1-af28-3ac607a6e5ee\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") "
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.924865 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") pod \"688f7366-a782-4bc1-af28-3ac607a6e5ee\" (UID: \"688f7366-a782-4bc1-af28-3ac607a6e5ee\") "
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.925041 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "688f7366-a782-4bc1-af28-3ac607a6e5ee" (UID: "688f7366-a782-4bc1-af28-3ac607a6e5ee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.925483 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688f7366-a782-4bc1-af28-3ac607a6e5ee-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 20 14:52:48 crc kubenswrapper[4949]: I0120 14:52:48.928492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "688f7366-a782-4bc1-af28-3ac607a6e5ee" (UID: "688f7366-a782-4bc1-af28-3ac607a6e5ee"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.000133 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hlfls"
Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.027201 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/688f7366-a782-4bc1-af28-3ac607a6e5ee-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.044449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"688f7366-a782-4bc1-af28-3ac607a6e5ee","Type":"ContainerDied","Data":"c1539a308fd744784b438cad04e9e177327a8b5e8d1d0de143603f94c11340c8"}
Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.044761 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1539a308fd744784b438cad04e9e177327a8b5e8d1d0de143603f94c11340c8"
Jan 20 14:52:49 crc kubenswrapper[4949]: I0120 14:52:49.044505 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 20 14:52:51 crc kubenswrapper[4949]: I0120 14:52:51.354703 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-x8799"
Jan 20 14:52:57 crc kubenswrapper[4949]: I0120 14:52:57.152784 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 14:52:57 crc kubenswrapper[4949]: I0120 14:52:57.153183 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 14:53:03 crc kubenswrapper[4949]: I0120 14:53:03.665090 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5g8xw"
Jan 20 14:53:09 crc kubenswrapper[4949]: E0120 14:53:09.444079 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 20 14:53:09 crc kubenswrapper[4949]: E0120 14:53:09.444875 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kvhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jpqvc_openshift-marketplace(78cf28ec-e605-49c2-882a-5cb98697605b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 20 14:53:09 crc kubenswrapper[4949]: E0120 14:53:09.446488 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jpqvc" podUID="78cf28ec-e605-49c2-882a-5cb98697605b"
Jan 20 14:53:10 crc kubenswrapper[4949]: E0120 14:53:10.618870 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jpqvc" podUID="78cf28ec-e605-49c2-882a-5cb98697605b"
Jan 20 14:53:10 crc kubenswrapper[4949]: E0120 14:53:10.678853 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 20 14:53:10 crc kubenswrapper[4949]: E0120 14:53:10.679036 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89h6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mhc4n_openshift-marketplace(7bad3a1d-1239-429c-b5a5-96f0bc2570ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 20 14:53:10 crc kubenswrapper[4949]: E0120 14:53:10.680230 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mhc4n" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.399105 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"]
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.531936 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 20 14:53:11 crc kubenswrapper[4949]: E0120 14:53:11.532157 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerName="pruner"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.532168 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerName="pruner"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.532271 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="688f7366-a782-4bc1-af28-3ac607a6e5ee" containerName="pruner"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.532716 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.541586 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.541849 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.542932 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.632100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.632162 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.735137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.735208 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.735302 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.767750 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:11 crc kubenswrapper[4949]: I0120 14:53:11.889697 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.229239 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mhc4n" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.294735 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.294949 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skxx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qwbjk_openshift-marketplace(461b9e2b-6f01-4719-946b-3c8266281ea4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:12 crc kubenswrapper[4949]: E0120 14:53:12.296141 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qwbjk" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" Jan 20 14:53:12 crc kubenswrapper[4949]: I0120 14:53:12.784277 4949 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.619872 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qwbjk" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.693301 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.693504 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cndjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qnf74_openshift-marketplace(13eef670-55b3-4832-a856-fe2bf8239996): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.694773 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qnf74" podUID="13eef670-55b3-4832-a856-fe2bf8239996" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.717009 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.717509 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89v9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5llwq_openshift-marketplace(df1fd354-0dd7-4186-b8f7-eb06991f4632): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.718831 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5llwq" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.750095 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.750255 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sr2h8_openshift-marketplace(8827d4ac-468d-4ceb-91c1-fb310a00ddcd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.751442 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sr2h8" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.757961 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.758086 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9xwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-n6ccj_openshift-marketplace(3da2cb76-6534-4d77-95c0-3d6aaff0de4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.759370 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n6ccj" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.781172 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.781355 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr8jf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-p7gt2_openshift-marketplace(2747a148-c24a-4d08-a2ca-19261c14c359): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 14:53:15 crc kubenswrapper[4949]: E0120 14:53:15.782581 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-p7gt2" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" Jan 20 14:53:15 crc kubenswrapper[4949]: I0120 14:53:15.809250 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hlfls"] Jan 20 14:53:15 crc kubenswrapper[4949]: W0120 14:53:15.822549 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa4eae9d_b492_4fd3_8baf_38ed726d9e4c.slice/crio-7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48 WatchSource:0}: Error finding container 7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48: Status 404 returned error can't find the container with id 7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48 Jan 20 14:53:15 crc kubenswrapper[4949]: I0120 14:53:15.851505 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 14:53:15 crc kubenswrapper[4949]: W0120 14:53:15.852608 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod45fa4b40_fafa_4aad_ac39_41cc0503c52a.slice/crio-64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce WatchSource:0}: Error finding container 64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce: Status 404 returned error can't find the container with id 64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.178709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hlfls" 
event={"ID":"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c","Type":"ContainerStarted","Data":"bfcbdf5bb88213505bc39059b435103825dec717d2ce2c1d476ad3022dc63743"} Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.179082 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hlfls" event={"ID":"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c","Type":"ContainerStarted","Data":"7e3560e4d082d172d5c1a8fd5beab27c49c6d48b0ebcf9ec7be6338ae4bffb48"} Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.180913 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"45fa4b40-fafa-4aad-ac39-41cc0503c52a","Type":"ContainerStarted","Data":"64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce"} Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181321 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qnf74" podUID="13eef670-55b3-4832-a856-fe2bf8239996" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181806 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-p7gt2" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181878 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n6ccj" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.181972 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5llwq" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" Jan 20 14:53:16 crc kubenswrapper[4949]: E0120 14:53:16.182101 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sr2h8" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.289167 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.290055 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.305876 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.395927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.395989 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.396193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.496932 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497066 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497104 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.497178 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"installer-9-crc\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.520928 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:16 crc kubenswrapper[4949]: I0120 14:53:16.637035 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.025345 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 14:53:17 crc kubenswrapper[4949]: W0120 14:53:17.031436 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7865dcac_fc72_4c7f_bd57_11f1c3bbb404.slice/crio-88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758 WatchSource:0}: Error finding container 88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758: Status 404 returned error can't find the container with id 88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758 Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.187442 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerStarted","Data":"88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758"} Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.189197 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hlfls" event={"ID":"fa4eae9d-b492-4fd3-8baf-38ed726d9e4c","Type":"ContainerStarted","Data":"32959a5386c9d5cf748fe8c258b13e375a60ae0d6ca180890d6058f7fe333898"} Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.190805 4949 generic.go:334] "Generic (PLEG): container finished" podID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerID="db5e90400fc32755b747912352b53318f41bc09940e363f2562a5b96d6685824" exitCode=0 Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.190836 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"45fa4b40-fafa-4aad-ac39-41cc0503c52a","Type":"ContainerDied","Data":"db5e90400fc32755b747912352b53318f41bc09940e363f2562a5b96d6685824"} Jan 20 14:53:17 crc kubenswrapper[4949]: I0120 14:53:17.224614 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hlfls" podStartSLOduration=171.224595567 podStartE2EDuration="2m51.224595567s" podCreationTimestamp="2026-01-20 14:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:53:17.203784451 +0000 UTC m=+193.013615309" watchObservedRunningTime="2026-01-20 14:53:17.224595567 +0000 UTC m=+193.034426425" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.203817 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerStarted","Data":"2962c26b419791e2bd3317ef6ff1beb0505dcf5c1382dca84f101cbe4881711f"} Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.227123 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.226972523 podStartE2EDuration="2.226972523s" podCreationTimestamp="2026-01-20 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:53:18.220257042 +0000 UTC m=+194.030087900" 
watchObservedRunningTime="2026-01-20 14:53:18.226972523 +0000 UTC m=+194.036803391" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.426292 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.521514 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") pod \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.521586 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") pod \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\" (UID: \"45fa4b40-fafa-4aad-ac39-41cc0503c52a\") " Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.522922 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45fa4b40-fafa-4aad-ac39-41cc0503c52a" (UID: "45fa4b40-fafa-4aad-ac39-41cc0503c52a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.529749 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45fa4b40-fafa-4aad-ac39-41cc0503c52a" (UID: "45fa4b40-fafa-4aad-ac39-41cc0503c52a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.623165 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:18 crc kubenswrapper[4949]: I0120 14:53:18.623216 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45fa4b40-fafa-4aad-ac39-41cc0503c52a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:19 crc kubenswrapper[4949]: I0120 14:53:19.208946 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"45fa4b40-fafa-4aad-ac39-41cc0503c52a","Type":"ContainerDied","Data":"64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce"} Jan 20 14:53:19 crc kubenswrapper[4949]: I0120 14:53:19.208990 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fa7ad4606e3f30b5211ebdfd6ef8ed55005e86555293bfc00edbdc2f0048ce" Jan 20 14:53:19 crc kubenswrapper[4949]: I0120 14:53:19.209049 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.152381 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.152903 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.152942 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.153433 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 14:53:27 crc kubenswrapper[4949]: I0120 14:53:27.153537 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28" gracePeriod=600 Jan 20 14:53:28 crc kubenswrapper[4949]: I0120 14:53:28.255803 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28" exitCode=0 Jan 20 14:53:28 crc kubenswrapper[4949]: I0120 14:53:28.255847 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.293083 4949 generic.go:334] "Generic (PLEG): container finished" podID="78cf28ec-e605-49c2-882a-5cb98697605b" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.293171 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.296292 4949 generic.go:334] "Generic (PLEG): container finished" podID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerID="adcd1b226c49fdd50a51858d8d3008d7b1270b1c8bb63285e139f1716bbba323" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.296341 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" 
event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"adcd1b226c49fdd50a51858d8d3008d7b1270b1c8bb63285e139f1716bbba323"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.301825 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerStarted","Data":"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.308260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerStarted","Data":"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.311184 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerStarted","Data":"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.313291 4949 generic.go:334] "Generic (PLEG): container finished" podID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerID="d6c186f2453346c7f234db0ae0179a8ca36fa49fbd7dc725635ea4fc974b9ba8" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.313371 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"d6c186f2453346c7f234db0ae0179a8ca36fa49fbd7dc725635ea4fc974b9ba8"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.315705 4949 generic.go:334] "Generic (PLEG): container finished" podID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" exitCode=0 Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.315773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.334045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d"} Jan 20 14:53:35 crc kubenswrapper[4949]: I0120 14:53:35.336401 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerStarted","Data":"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.345107 4949 generic.go:334] "Generic (PLEG): container finished" podID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.345187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 
14:53:36.348446 4949 generic.go:334] "Generic (PLEG): container finished" podID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.349599 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.351842 4949 generic.go:334] "Generic (PLEG): container finished" podID="2747a148-c24a-4d08-a2ca-19261c14c359" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.351910 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.354086 4949 generic.go:334] "Generic (PLEG): container finished" podID="13eef670-55b3-4832-a856-fe2bf8239996" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421" exitCode=0 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.354126 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.361879 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerStarted","Data":"e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.366366 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerStarted","Data":"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.368600 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerStarted","Data":"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.370771 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerStarted","Data":"fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03"} Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.412133 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhc4n" podStartSLOduration=2.120435396 podStartE2EDuration="1m5.412115137s" podCreationTimestamp="2026-01-20 14:52:31 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.516934802 +0000 UTC m=+148.326765660" lastFinishedPulling="2026-01-20 14:53:35.808614543 +0000 UTC m=+211.618445401" observedRunningTime="2026-01-20 14:53:36.409031016 +0000 UTC m=+212.218861874" 
watchObservedRunningTime="2026-01-20 14:53:36.412115137 +0000 UTC m=+212.221945995" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.426974 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" containerID="cri-o://244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a" gracePeriod=15 Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.472109 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jpqvc" podStartSLOduration=3.247475246 podStartE2EDuration="1m6.472090356s" podCreationTimestamp="2026-01-20 14:52:30 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.492652395 +0000 UTC m=+148.302483253" lastFinishedPulling="2026-01-20 14:53:35.717267505 +0000 UTC m=+211.527098363" observedRunningTime="2026-01-20 14:53:36.468264178 +0000 UTC m=+212.278095036" watchObservedRunningTime="2026-01-20 14:53:36.472090356 +0000 UTC m=+212.281921224" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.486202 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwbjk" podStartSLOduration=2.2273982549999998 podStartE2EDuration="1m3.486185863s" podCreationTimestamp="2026-01-20 14:52:33 +0000 UTC" firstStartedPulling="2026-01-20 14:52:34.663280949 +0000 UTC m=+150.473111807" lastFinishedPulling="2026-01-20 14:53:35.922068557 +0000 UTC m=+211.731899415" observedRunningTime="2026-01-20 14:53:36.482608284 +0000 UTC m=+212.292439142" watchObservedRunningTime="2026-01-20 14:53:36.486185863 +0000 UTC m=+212.296016721" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.500509 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5llwq" podStartSLOduration=2.191202157 podStartE2EDuration="1m5.500486378s" podCreationTimestamp="2026-01-20 14:52:31 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.526884474 +0000 UTC m=+148.336715332" lastFinishedPulling="2026-01-20 14:53:35.836168695 +0000 UTC m=+211.645999553" observedRunningTime="2026-01-20 14:53:36.498353941 +0000 UTC m=+212.308184809" watchObservedRunningTime="2026-01-20 14:53:36.500486378 +0000 UTC m=+212.310317236" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.780978 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.814230 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-84cc499644-bbq66"] Jan 20 14:53:36 crc kubenswrapper[4949]: E0120 14:53:36.814727 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.814808 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" Jan 20 14:53:36 crc kubenswrapper[4949]: E0120 14:53:36.814884 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerName="pruner" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816188 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerName="pruner" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816425 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fa4b40-fafa-4aad-ac39-41cc0503c52a" containerName="pruner" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816508 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerName="oauth-openshift" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.816997 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.858368 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84cc499644-bbq66"] Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.869931 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.869966 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.869990 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870151 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870210 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870232 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870267 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870298 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870315 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870333 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870367 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") pod \"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870389 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") pod 
\"45bacc20-7998-4250-bbd3-fd1d24741ea7\" (UID: \"45bacc20-7998-4250-bbd3-fd1d24741ea7\") " Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870533 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mj4h\" (UniqueName: \"kubernetes.io/projected/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-kube-api-access-8mj4h\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870561 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870581 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-session\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870641 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870657 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-dir\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-error\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-login\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870712 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-policies\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.870797 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.871148 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.871866 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.872194 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.872721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.872830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.876188 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.876534 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.877580 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.877776 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.878026 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.878094 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.879507 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns" (OuterVolumeSpecName: "kube-api-access-scxns") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "kube-api-access-scxns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.879639 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.891985 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "45bacc20-7998-4250-bbd3-fd1d24741ea7" (UID: "45bacc20-7998-4250-bbd3-fd1d24741ea7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.972822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mj4h\" (UniqueName: \"kubernetes.io/projected/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-kube-api-access-8mj4h\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.972969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.972997 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-session\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973014 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973061 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973079 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-dir\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-error\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " 
pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973132 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973148 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-login\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973166 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973184 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973215 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-policies\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973241 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973298 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973308 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973319 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973330 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973339 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scxns\" (UniqueName: \"kubernetes.io/projected/45bacc20-7998-4250-bbd3-fd1d24741ea7-kube-api-access-scxns\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973348 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973358 4949 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973366 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973375 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973384 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973393 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973402 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973413 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973422 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45bacc20-7998-4250-bbd3-fd1d24741ea7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.973658 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-dir\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.974728 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.975456 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-audit-policies\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.976011 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.977137 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.978282 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-login\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.979895 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.982866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-error\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.982975 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-session\") pod 
\"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983156 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983428 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.983921 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:36 crc kubenswrapper[4949]: I0120 14:53:36.999869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mj4h\" (UniqueName: \"kubernetes.io/projected/f9029eb7-d052-4ee9-a01a-3bef83bcf99c-kube-api-access-8mj4h\") pod \"oauth-openshift-84cc499644-bbq66\" (UID: \"f9029eb7-d052-4ee9-a01a-3bef83bcf99c\") " pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.136198 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.356868 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84cc499644-bbq66"] Jan 20 14:53:37 crc kubenswrapper[4949]: W0120 14:53:37.361635 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9029eb7_d052_4ee9_a01a_3bef83bcf99c.slice/crio-0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9 WatchSource:0}: Error finding container 0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9: Status 404 returned error can't find the container with id 0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9 Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.396555 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerStarted","Data":"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192"} Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.397322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" event={"ID":"f9029eb7-d052-4ee9-a01a-3bef83bcf99c","Type":"ContainerStarted","Data":"0707e5f6404816d7de4b2cec7591bc0b8ce1d091d315982142ca611079d790a9"} Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400075 4949 generic.go:334] "Generic (PLEG): container finished" podID="45bacc20-7998-4250-bbd3-fd1d24741ea7" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a" exitCode=0 Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400114 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerDied","Data":"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"} Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" event={"ID":"45bacc20-7998-4250-bbd3-fd1d24741ea7","Type":"ContainerDied","Data":"d81048ba925a2afe07b7979e16e8232a499fa207550149cb307eb7b531aa376f"} Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400154 4949 scope.go:117] "RemoveContainer" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a" Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.400299 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-brlp7" Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.428968 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"] Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.434892 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-brlp7"] Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.451124 4949 scope.go:117] "RemoveContainer" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a" Jan 20 14:53:37 crc kubenswrapper[4949]: E0120 14:53:37.451891 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a\": container with ID starting with 244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a not found: ID does not exist" containerID="244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a" Jan 20 14:53:37 crc kubenswrapper[4949]: I0120 14:53:37.451929 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a"} err="failed to get container status \"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a\": rpc error: code = NotFound desc = could not find container \"244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a\": container with ID starting with 244dd3846518c2a098188c6ea1c65311d769e4c69c1f0e596bcac9c1262aee3a not found: ID does not exist" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.413476 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerStarted","Data":"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a"} Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.415207 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerStarted","Data":"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c"} Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.418157 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerStarted","Data":"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"} Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.419629 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" event={"ID":"f9029eb7-d052-4ee9-a01a-3bef83bcf99c","Type":"ContainerStarted","Data":"c54c6c4f3af2abe5a9933a85f07ad19a7d6ba1c7790d6aa7bce1393fcc21b177"} Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.419774 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.423980 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.431080 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-sr2h8" podStartSLOduration=2.902769578 podStartE2EDuration="1m7.431065421s" podCreationTimestamp="2026-01-20 14:52:31 +0000 UTC" firstStartedPulling="2026-01-20 14:52:32.496569081 +0000 UTC m=+148.306399939" lastFinishedPulling="2026-01-20 14:53:37.024864924 +0000 UTC m=+212.834695782" observedRunningTime="2026-01-20 14:53:38.430735299 +0000 UTC m=+214.240566177" watchObservedRunningTime="2026-01-20 14:53:38.431065421 +0000 UTC m=+214.240896279" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.449822 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-84cc499644-bbq66" podStartSLOduration=27.449802167 podStartE2EDuration="27.449802167s" podCreationTimestamp="2026-01-20 14:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:53:38.447033557 +0000 UTC m=+214.256864405" watchObservedRunningTime="2026-01-20 14:53:38.449802167 +0000 UTC m=+214.259633025" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.473431 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7gt2" podStartSLOduration=3.281834644 podStartE2EDuration="1m4.473412966s" podCreationTimestamp="2026-01-20 14:52:34 +0000 UTC" firstStartedPulling="2026-01-20 14:52:35.708759169 +0000 UTC m=+151.518590027" lastFinishedPulling="2026-01-20 14:53:36.900337481 +0000 UTC m=+212.710168349" observedRunningTime="2026-01-20 14:53:38.473318523 +0000 UTC m=+214.283149381" watchObservedRunningTime="2026-01-20 14:53:38.473412966 +0000 UTC m=+214.283243824" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.519567 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6ccj" podStartSLOduration=3.479045637 podStartE2EDuration="1m6.519548017s" podCreationTimestamp="2026-01-20 14:52:32 +0000 UTC" firstStartedPulling="2026-01-20 14:52:34.603871873 +0000 UTC m=+150.413702731" lastFinishedPulling="2026-01-20 14:53:37.644374253 +0000 UTC m=+213.454205111" observedRunningTime="2026-01-20 14:53:38.499035408 +0000 UTC m=+214.308866266" watchObservedRunningTime="2026-01-20 14:53:38.519548017 +0000 UTC m=+214.329378875" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.520996 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qnf74" podStartSLOduration=3.218602509 podStartE2EDuration="1m4.520990879s" podCreationTimestamp="2026-01-20 14:52:34 +0000 UTC" firstStartedPulling="2026-01-20 14:52:35.72100363 +0000 UTC m=+151.530834488" lastFinishedPulling="2026-01-20 14:53:37.023392 +0000 UTC m=+212.833222858" observedRunningTime="2026-01-20 14:53:38.519648571 +0000 UTC m=+214.329479429" watchObservedRunningTime="2026-01-20 14:53:38.520990879 +0000 UTC m=+214.330821737" Jan 20 14:53:38 crc kubenswrapper[4949]: I0120 14:53:38.795662 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bacc20-7998-4250-bbd3-fd1d24741ea7" path="/var/lib/kubelet/pods/45bacc20-7998-4250-bbd3-fd1d24741ea7/volumes" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.261128 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.261538 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.362990 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.469847 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.469898 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.479207 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.545030 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.701889 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.702156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.737145 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.871809 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.872061 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:41 crc kubenswrapper[4949]: I0120 14:53:41.909952 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:42 crc kubenswrapper[4949]: I0120 14:53:42.491531 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:42 crc kubenswrapper[4949]: I0120 14:53:42.494447 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:53:42 crc kubenswrapper[4949]: I0120 14:53:42.507825 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.279136 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.279194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.335870 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.499850 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.664999 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.665050 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:43 crc kubenswrapper[4949]: I0120 14:53:43.716012 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.477543 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.477948 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.497783 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.535335 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.620095 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.882318 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:44 crc kubenswrapper[4949]: I0120 14:53:44.882709 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.460084 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5llwq" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" containerID="cri-o://fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" gracePeriod=2 Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.502760 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.619127 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.619619 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhc4n" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" containerID="cri-o://e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe" gracePeriod=2 Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.797032 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.907024 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") pod \"df1fd354-0dd7-4186-b8f7-eb06991f4632\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.907127 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") pod \"df1fd354-0dd7-4186-b8f7-eb06991f4632\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.907206 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") pod \"df1fd354-0dd7-4186-b8f7-eb06991f4632\" (UID: \"df1fd354-0dd7-4186-b8f7-eb06991f4632\") " Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.908868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities" (OuterVolumeSpecName: "utilities") pod "df1fd354-0dd7-4186-b8f7-eb06991f4632" (UID: "df1fd354-0dd7-4186-b8f7-eb06991f4632"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.913794 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v" (OuterVolumeSpecName: "kube-api-access-89v9v") pod "df1fd354-0dd7-4186-b8f7-eb06991f4632" (UID: "df1fd354-0dd7-4186-b8f7-eb06991f4632"). InnerVolumeSpecName "kube-api-access-89v9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.926225 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnf74" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" probeResult="failure" output=< Jan 20 14:53:45 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 14:53:45 crc kubenswrapper[4949]: > Jan 20 14:53:45 crc kubenswrapper[4949]: I0120 14:53:45.955343 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df1fd354-0dd7-4186-b8f7-eb06991f4632" (UID: "df1fd354-0dd7-4186-b8f7-eb06991f4632"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.009053 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.009097 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89v9v\" (UniqueName: \"kubernetes.io/projected/df1fd354-0dd7-4186-b8f7-eb06991f4632-kube-api-access-89v9v\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.009108 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1fd354-0dd7-4186-b8f7-eb06991f4632-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.474005 4949 generic.go:334] "Generic (PLEG): container finished" podID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerID="e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe" exitCode=0 Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.474120 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe"} Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477599 4949 generic.go:334] "Generic (PLEG): container finished" podID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" exitCode=0 Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477689 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35"} Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477727 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5llwq" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477767 4949 scope.go:117] "RemoveContainer" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.477751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5llwq" event={"ID":"df1fd354-0dd7-4186-b8f7-eb06991f4632","Type":"ContainerDied","Data":"e348fe1ab134c2f2dce8a8e0b683563d2a0af2429ee1af4e99981d29b40cdee3"} Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.511969 4949 scope.go:117] "RemoveContainer" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.520404 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.523774 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5llwq"] Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.550206 4949 scope.go:117] "RemoveContainer" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.569191 4949 scope.go:117] "RemoveContainer" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" Jan 20 14:53:46 crc kubenswrapper[4949]: E0120 14:53:46.570068 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35\": container with ID starting with fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35 not found: ID does not exist" containerID="fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570105 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35"} err="failed to get container status \"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35\": rpc error: code = NotFound desc = could not find container \"fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35\": container with ID starting with fbf5a8d6ce9a261cf6636d93d84a569f2f755ace5fe7ff6d27b4519456d71c35 not found: ID does not exist" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570132 4949 scope.go:117] "RemoveContainer" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" Jan 20 14:53:46 crc kubenswrapper[4949]: E0120 14:53:46.570681 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30\": container with ID starting with b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30 not found: ID does not exist" containerID="b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570720 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30"} err="failed to get container status \"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30\": rpc error: code = NotFound desc = could not find 
container \"b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30\": container with ID starting with b783fdf44acd6c7d79d24bb4f91d95cb56a817081c3c83bf30c00d1777897f30 not found: ID does not exist" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.570749 4949 scope.go:117] "RemoveContainer" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" Jan 20 14:53:46 crc kubenswrapper[4949]: E0120 14:53:46.573895 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90\": container with ID starting with 41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90 not found: ID does not exist" containerID="41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.573936 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90"} err="failed to get container status \"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90\": rpc error: code = NotFound desc = could not find container \"41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90\": container with ID starting with 41a40b331b8fc5239b686654614e96c14aa4f27c1d89e3ae9111a056da80eb90 not found: ID does not exist" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.574190 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.717205 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") pod \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.717253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") pod \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.717362 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") pod \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\" (UID: \"7bad3a1d-1239-429c-b5a5-96f0bc2570ad\") " Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.719179 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities" (OuterVolumeSpecName: "utilities") pod "7bad3a1d-1239-429c-b5a5-96f0bc2570ad" (UID: "7bad3a1d-1239-429c-b5a5-96f0bc2570ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.722248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r" (OuterVolumeSpecName: "kube-api-access-89h6r") pod "7bad3a1d-1239-429c-b5a5-96f0bc2570ad" (UID: "7bad3a1d-1239-429c-b5a5-96f0bc2570ad"). InnerVolumeSpecName "kube-api-access-89h6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.758648 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bad3a1d-1239-429c-b5a5-96f0bc2570ad" (UID: "7bad3a1d-1239-429c-b5a5-96f0bc2570ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.797940 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" path="/var/lib/kubelet/pods/df1fd354-0dd7-4186-b8f7-eb06991f4632/volumes" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.818622 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.818661 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89h6r\" (UniqueName: \"kubernetes.io/projected/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-kube-api-access-89h6r\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:46 crc kubenswrapper[4949]: I0120 14:53:46.818671 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bad3a1d-1239-429c-b5a5-96f0bc2570ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.022912 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.023247 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwbjk" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="registry-server" containerID="cri-o://fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03" gracePeriod=2 Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.484617 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhc4n" event={"ID":"7bad3a1d-1239-429c-b5a5-96f0bc2570ad","Type":"ContainerDied","Data":"461db7652293e0c019275b02f84835686fdbeffea7ab03b5ba355fd27be457ec"} Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.484632 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhc4n" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.485033 4949 scope.go:117] "RemoveContainer" containerID="e45c6b4f92bb83f800ec9381216ee31731987b1d3df5c1a59a156d29ee8e3ffe" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.489788 4949 generic.go:334] "Generic (PLEG): container finished" podID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerID="fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03" exitCode=0 Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.489834 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03"} Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.501565 4949 scope.go:117] "RemoveContainer" containerID="d6c186f2453346c7f234db0ae0179a8ca36fa49fbd7dc725635ea4fc974b9ba8" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.507422 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.511298 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhc4n"] Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.520717 4949 scope.go:117] "RemoveContainer" containerID="e457b0da5f8d7f599c13928f4a9416d0d3623297c6f14359bad682b4ffdc7a4a" Jan 20 14:53:47 crc kubenswrapper[4949]: I0120 14:53:47.888064 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.037214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") pod \"461b9e2b-6f01-4719-946b-3c8266281ea4\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.037304 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") pod \"461b9e2b-6f01-4719-946b-3c8266281ea4\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.037411 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") pod \"461b9e2b-6f01-4719-946b-3c8266281ea4\" (UID: \"461b9e2b-6f01-4719-946b-3c8266281ea4\") " Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.038868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities" (OuterVolumeSpecName: "utilities") pod "461b9e2b-6f01-4719-946b-3c8266281ea4" (UID: "461b9e2b-6f01-4719-946b-3c8266281ea4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.045688 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8" (OuterVolumeSpecName: "kube-api-access-skxx8") pod "461b9e2b-6f01-4719-946b-3c8266281ea4" (UID: "461b9e2b-6f01-4719-946b-3c8266281ea4"). InnerVolumeSpecName "kube-api-access-skxx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.061187 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "461b9e2b-6f01-4719-946b-3c8266281ea4" (UID: "461b9e2b-6f01-4719-946b-3c8266281ea4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.139484 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skxx8\" (UniqueName: \"kubernetes.io/projected/461b9e2b-6f01-4719-946b-3c8266281ea4-kube-api-access-skxx8\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.139584 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.139605 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461b9e2b-6f01-4719-946b-3c8266281ea4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.499428 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwbjk" event={"ID":"461b9e2b-6f01-4719-946b-3c8266281ea4","Type":"ContainerDied","Data":"30a76834740fb17389d4718b7b04b96d874c290be714a19e58cb218d3172d38f"} Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.499578 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwbjk" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.499764 4949 scope.go:117] "RemoveContainer" containerID="fa32cab622616b956be08021842c2cf0ec7151dd32fc0f3fc19fb0fd5e936c03" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.513382 4949 scope.go:117] "RemoveContainer" containerID="adcd1b226c49fdd50a51858d8d3008d7b1270b1c8bb63285e139f1716bbba323" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.526777 4949 scope.go:117] "RemoveContainer" containerID="ce75b13bd7f1b8e95f0b7ca8644b4475c13ac79f0a7f60da9f3dac9e11e95a9e" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.577342 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.611404 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwbjk"] Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.797960 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" path="/var/lib/kubelet/pods/461b9e2b-6f01-4719-946b-3c8266281ea4/volumes" Jan 20 14:53:48 crc kubenswrapper[4949]: I0120 14:53:48.799825 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" path="/var/lib/kubelet/pods/7bad3a1d-1239-429c-b5a5-96f0bc2570ad/volumes" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835170 4949 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835831 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835847 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835861 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835869 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835881 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835889 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835904 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835912 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835920 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835928 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835937 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835944 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="extract-content" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835953 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835960 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835971 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.835980 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="extract-utilities" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.835993 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836001 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836125 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1fd354-0dd7-4186-b8f7-eb06991f4632" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836141 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bad3a1d-1239-429c-b5a5-96f0bc2570ad" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836159 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="461b9e2b-6f01-4719-946b-3c8266281ea4" containerName="registry-server" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836673 4949 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836819 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.836852 4949 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837002 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837025 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837105 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837164 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837191 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3" gracePeriod=15 Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837430 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837680 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837705 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837713 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837745 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837755 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837766 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 14:53:54 crc 
kubenswrapper[4949]: I0120 14:53:54.837773 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837789 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837871 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 14:53:54 crc kubenswrapper[4949]: E0120 14:53:54.837887 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.837894 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838032 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838045 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838059 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838070 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.838083 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.839862 4949 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.876053 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922201 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922255 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922275 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922307 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922343 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922434 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.922470 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.942824 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.944066 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.944230 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.984305 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 
14:53:54.984941 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:54 crc kubenswrapper[4949]: I0120 14:53:54.985295 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023478 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023585 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023663 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023673 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023743 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023742 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023790 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023950 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.023986 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.024075 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.024102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.173042 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:53:55 crc kubenswrapper[4949]: W0120 14:53:55.197712 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471 WatchSource:0}: Error finding container 9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471: Status 404 returned error can't find the container with id 9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471 Jan 20 14:53:55 crc kubenswrapper[4949]: E0120 14:53:55.204980 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c7820a1e677d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,LastTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.537639 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.538582 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3" exitCode=2 Jan 20 14:53:55 crc kubenswrapper[4949]: I0120 14:53:55.539945 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9ec4db1631e16dc8853c6af9c177b0ba181f8a98d3a03603dce7e25f7aac9471"} Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.550298 4949 generic.go:334] "Generic (PLEG): container finished" podID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerID="2962c26b419791e2bd3317ef6ff1beb0505dcf5c1382dca84f101cbe4881711f" exitCode=0 Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.550436 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerDied","Data":"2962c26b419791e2bd3317ef6ff1beb0505dcf5c1382dca84f101cbe4881711f"} Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.551630 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": 
dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.552184 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.552560 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.555574 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.556308 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475" exitCode=0 Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.556347 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37" exitCode=0 Jan 20 14:53:56 crc kubenswrapper[4949]: I0120 14:53:56.556367 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153" exitCode=0 Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.564373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57"} Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.565140 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.565930 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.566671 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.569189 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.570344 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8" exitCode=0 Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.779374 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.780658 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.781036 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.781371 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.785335 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.786381 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.786752 4949 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.787020 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.787307 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.787585 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.859870 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") pod \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.859987 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") pod \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860044 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.859980 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock" (OuterVolumeSpecName: "var-lock") pod "7865dcac-fc72-4c7f-bd57-11f1c3bbb404" (UID: "7865dcac-fc72-4c7f-bd57-11f1c3bbb404"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860104 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860083 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7865dcac-fc72-4c7f-bd57-11f1c3bbb404" (UID: "7865dcac-fc72-4c7f-bd57-11f1c3bbb404"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860112 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860137 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") pod \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\" (UID: \"7865dcac-fc72-4c7f-bd57-11f1c3bbb404\") " Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860085 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860387 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860410 4949 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860422 4949 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860433 4949 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.860422 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.866960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7865dcac-fc72-4c7f-bd57-11f1c3bbb404" (UID: "7865dcac-fc72-4c7f-bd57-11f1c3bbb404"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.961686 4949 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:57 crc kubenswrapper[4949]: I0120 14:53:57.961715 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7865dcac-fc72-4c7f-bd57-11f1c3bbb404-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.578557 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7865dcac-fc72-4c7f-bd57-11f1c3bbb404","Type":"ContainerDied","Data":"88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758"} Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.578627 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88edf5361ebaf1723f5fbd2c9d545afc1d0c61aa601b197385b040f19cdc5758" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.578675 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.582890 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.584669 4949 scope.go:117] "RemoveContainer" containerID="06d575c71421886baf5564613694355bab5b5c99af0c972ba8d858ba5a754475" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.584677 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.591684 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.592120 4949 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.592644 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.593134 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.602048 4949 scope.go:117] "RemoveContainer" containerID="04f0ffe01bc6407c7b485ab9d49ec270850449c9c451061fe4f8f52347bb4f37" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.609073 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.609503 4949 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.609798 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.610012 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.617388 4949 scope.go:117] "RemoveContainer" 
containerID="7d29456136efade7f29c3b64051bbe85b9cb05e1e954af1880d31d5b5ee98153" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.629685 4949 scope.go:117] "RemoveContainer" containerID="903b1b18baf8eaa93af4bbd4f10b8b5b02d6ff5ddbc7e912bd59413d6cdd32a3" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.640091 4949 scope.go:117] "RemoveContainer" containerID="345d64d6e27013c5a8b9ab2da2b24cd539452ae9945aa5e08c7158a1cdd8e8b8" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.653191 4949 scope.go:117] "RemoveContainer" containerID="720c7ea0b9933576dd58bc89f11a3eb4d33200663c486068445d5d7db68d6021" Jan 20 14:53:58 crc kubenswrapper[4949]: I0120 14:53:58.797347 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 20 14:53:59 crc kubenswrapper[4949]: E0120 14:53:59.906013 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.41:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c7820a1e677d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,LastTimestamp:2026-01-20 14:53:55.200284633 +0000 UTC m=+231.010115491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.335249 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T14:54:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336140 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336330 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336466 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336648 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:02 crc kubenswrapper[4949]: E0120 14:54:02.336660 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.791189 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.795129 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.795696 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.977691 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.978308 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.978656 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.979080 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection 
refused" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.979359 4949 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:04 crc kubenswrapper[4949]: I0120 14:54:04.979385 4949 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 14:54:04 crc kubenswrapper[4949]: E0120 14:54:04.979804 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="200ms" Jan 20 14:54:05 crc kubenswrapper[4949]: E0120 14:54:05.180671 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="400ms" Jan 20 14:54:05 crc kubenswrapper[4949]: E0120 14:54:05.581692 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="800ms" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.788407 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.789257 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.791087 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.791529 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.812425 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.812697 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:05 crc kubenswrapper[4949]: E0120 14:54:05.813153 4949 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: 
connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:05 crc kubenswrapper[4949]: I0120 14:54:05.813806 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:05 crc kubenswrapper[4949]: W0120 14:54:05.838168 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23 WatchSource:0}: Error finding container 2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23: Status 404 returned error can't find the container with id 2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23 Jan 20 14:54:06 crc kubenswrapper[4949]: E0120 14:54:06.383652 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.41:6443: connect: connection refused" interval="1.6s" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.634967 4949 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d3dd0c06d401501579935b66e49654c3411ad33eea41fd49d5f4d6cfd85b87e3" exitCode=0 Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635016 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d3dd0c06d401501579935b66e49654c3411ad33eea41fd49d5f4d6cfd85b87e3"} Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635048 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2983310a42b15219728f319013942b24154fe890aa51f9d84a23329bd4758d23"} Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635336 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635350 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:06 crc kubenswrapper[4949]: E0120 14:54:06.635812 4949 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.635848 4949 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.636330 4949 status_manager.go:851] "Failed to get status for pod" podUID="13eef670-55b3-4832-a856-fe2bf8239996" pod="openshift-marketplace/redhat-operators-qnf74" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qnf74\": dial tcp 38.102.83.41:6443: 
connect: connection refused" Jan 20 14:54:06 crc kubenswrapper[4949]: I0120 14:54:06.636674 4949 status_manager.go:851] "Failed to get status for pod" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.41:6443: connect: connection refused" Jan 20 14:54:07 crc kubenswrapper[4949]: I0120 14:54:07.652262 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"27bec0e4741efd495c8af96759bd72a036e9c4d7ba91f8b673ab18784c51a5bf"} Jan 20 14:54:07 crc kubenswrapper[4949]: I0120 14:54:07.652741 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2a4b029a423926f16871a3e9269a8618ea80179ba2a549951c5ff5f6bb110614"} Jan 20 14:54:07 crc kubenswrapper[4949]: I0120 14:54:07.652756 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"069731a5c7ef2920f465f5e2789ae4bce0b67035ef4b27fcc6457318153b6cdc"} Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661279 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ace5f3bbe362cb444c944705d47bf10d9deda50903b7882c1a94140063332645"} Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661351 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f4abaa5724ea799ffe29595239d8d4210fa0cd88877905e6722cffe5ce5f2ec"} Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661464 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661698 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.661734 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.665787 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.665861 4949 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981" exitCode=1 Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.665899 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981"} Jan 20 14:54:08 crc kubenswrapper[4949]: I0120 14:54:08.666642 4949 scope.go:117] "RemoveContainer" 
containerID="97224e07abaad363b4be6cdb87d538e331df0fce5f56e622aba4cb26e8e2a981" Jan 20 14:54:09 crc kubenswrapper[4949]: I0120 14:54:09.675127 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 14:54:09 crc kubenswrapper[4949]: I0120 14:54:09.675810 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b9354d33fb531351a332748f858821a6ead62d29529834db537a435a144f7ee4"} Jan 20 14:54:10 crc kubenswrapper[4949]: I0120 14:54:10.814748 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:10 crc kubenswrapper[4949]: I0120 14:54:10.814802 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:10 crc kubenswrapper[4949]: I0120 14:54:10.822455 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:11 crc kubenswrapper[4949]: I0120 14:54:11.305908 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:54:13 crc kubenswrapper[4949]: I0120 14:54:13.693794 4949 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.700000 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.700030 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.705110 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 14:54:14 crc kubenswrapper[4949]: I0120 14:54:14.811982 4949 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8985c76-0d54-4daf-bf6c-39514ca3a750" Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.704893 4949 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.705243 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad1290ba-8b84-450b-8b26-3b8e962aef5e" Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.707650 4949 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8985c76-0d54-4daf-bf6c-39514ca3a750" Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.755071 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:54:15 crc kubenswrapper[4949]: I0120 14:54:15.759863 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.037698 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.192840 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.468301 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 14:54:20 crc kubenswrapper[4949]: I0120 14:54:20.710375 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.006993 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.302046 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.723701 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.865757 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 14:54:21 crc kubenswrapper[4949]: I0120 14:54:21.967973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.039771 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.120928 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.686698 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 14:54:22 crc kubenswrapper[4949]: I0120 14:54:22.976164 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.104175 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.164995 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.332435 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.551786 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.688703 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" 
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.758070 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 20 14:54:23 crc kubenswrapper[4949]: I0120 14:54:23.762878 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 20 14:54:24 crc kubenswrapper[4949]: I0120 14:54:24.167314 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 20 14:54:24 crc kubenswrapper[4949]: I0120 14:54:24.707408 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 20 14:54:24 crc kubenswrapper[4949]: I0120 14:54:24.780102 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.028313 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.354971 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.515925 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.563552 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 20 14:54:25 crc kubenswrapper[4949]: I0120 14:54:25.750978 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.263399 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.564296 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.745983 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.783601 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.797867 4949 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 20 14:54:26 crc kubenswrapper[4949]: I0120 14:54:26.895679 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.368318 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.460277 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.614504 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.687381 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.700180 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.701447 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.890598 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 20 14:54:27 crc kubenswrapper[4949]: I0120 14:54:27.905066 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.083393 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.208810 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.303369 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.348396 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.462419 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.483948 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.507119 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.528932 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.709027 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.743606 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.912668 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.930649 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 20 14:54:28 crc kubenswrapper[4949]: I0120 14:54:28.978110 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.061695 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.071618 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.165342 4949 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.226777 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.300491 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.386754 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.442765 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.563973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.629405 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.688825 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.776936 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.823124 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.870153 4949 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 20 14:54:29 crc kubenswrapper[4949]: I0120 14:54:29.948664 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.037878 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.057861 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.085548 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.171268 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.177228 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.286039 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.468603 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.500258 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.508489 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.514096 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.653802 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.777369 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.862978 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 20 14:54:30 crc kubenswrapper[4949]: I0120 14:54:30.867183 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.144377 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.186675 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.234209 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.332994 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.410732 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.459479 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.565431 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.571836 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.652084 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.683827 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.790457 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.881380 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.929319 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.939854 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.954780 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 20 14:54:31 crc kubenswrapper[4949]: I0120 14:54:31.989994 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.021991 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.048625 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.200954 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.287677 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.303112 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.351318 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.432649 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.433741 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.472492 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.488006 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.497697 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.554049 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.611883 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.653434 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.739369 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.772776 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.793856 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.834626 4949 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.838535 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.838501428 podStartE2EDuration="38.838501428s" podCreationTimestamp="2026-01-20 14:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:54:13.724078784 +0000 UTC m=+249.533909642" watchObservedRunningTime="2026-01-20 14:54:32.838501428 +0000 UTC m=+268.648332296"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.840105 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.840155 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.844566 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.863574 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.863555209 podStartE2EDuration="19.863555209s" podCreationTimestamp="2026-01-20 14:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:54:32.862603248 +0000 UTC m=+268.672434146" watchObservedRunningTime="2026-01-20 14:54:32.863555209 +0000 UTC m=+268.673386067"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.904858 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.936694 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.942699 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 20 14:54:32 crc kubenswrapper[4949]: I0120 14:54:32.966040 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.265261 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.279038 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.291256 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.626940 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.664955 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.730639 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.739979 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.797239 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.921343 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 20 14:54:33 crc kubenswrapper[4949]: I0120 14:54:33.928423 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.070345 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.101905 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.119334 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.161480 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.285533 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.303500 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.473604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.605810 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.689851 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.968151 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.968570 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 20 14:54:34 crc kubenswrapper[4949]: I0120 14:54:34.972742 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.038798 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.054174 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.153750 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.170303 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.197167 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.277714 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.322230 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.341415 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.369637 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.454828 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.512099 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.527139 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.529033 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.562242 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.688011 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.797075 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.799476 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.803366 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.822912 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.854247 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.869321 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.875251 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.892779 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.962008 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.963469 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 20 14:54:35 crc kubenswrapper[4949]: I0120 14:54:35.979454 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.002291 4949 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.002555 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" gracePeriod=5
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.259296 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.268807 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.350508 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.353957 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.364585 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.667243 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.716247 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.791964 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 20 14:54:36 crc kubenswrapper[4949]: I0120 14:54:36.881968 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.035044 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.054501 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.057363 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.064547 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.083089 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.128275 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.258477 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.385428 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.386658 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.468847 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.481450 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.553894 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.675601 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.844760 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.888366 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.919747 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 20 14:54:37 crc kubenswrapper[4949]: I0120 14:54:37.930592 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.028239 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.028805 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.056352 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.160133 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.162252 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.174563 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.233286 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.248479 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.270015 4949 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.270360 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.387249 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.388227 4949 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.496725 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.517340 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.740995 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.786308 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.843661 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 20 14:54:38 crc kubenswrapper[4949]: I0120 14:54:38.997824 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.003240 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.092481 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.156769 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.284963 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.304384 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.482171 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.559086 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.567659 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.634178 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.664574 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.724073 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.858035 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.871372 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 20 14:54:39 crc kubenswrapper[4949]: I0120 14:54:39.873622 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.047656 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.050274 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.274017 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.279219 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.527077 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.548636 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.684248 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.796370 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.801111 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 20 14:54:40 crc kubenswrapper[4949]: I0120 14:54:40.837708 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.024208 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.114335 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.221002 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.574277 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.574359 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639181 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639248 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639277 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639309 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639316 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639331 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639375 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639382 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639622 4949 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639634 4949 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639641 4949 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.639651 4949 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.648214 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.741221 4949 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.773292 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.831444 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846089 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846196 4949 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" exitCode=137 Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846252 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.846270 4949 scope.go:117] "RemoveContainer" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.862252 4949 scope.go:117] "RemoveContainer" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" Jan 20 14:54:41 crc kubenswrapper[4949]: E0120 14:54:41.862686 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57\": container with ID starting with ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57 not found: ID does not exist" containerID="ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.862737 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57"} err="failed to get container status \"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57\": rpc error: code = NotFound desc = could not find container \"ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57\": container with ID starting with ab9b2862a0a1a7839a3fb35c24f930d97c74f33ad35278a0a29a640348785c57 not found: ID does not exist" Jan 20 14:54:41 crc kubenswrapper[4949]: I0120 14:54:41.947715 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.120097 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.215375 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.259761 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.263771 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.271566 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.319183 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.585558 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.796434 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.796690 4949 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.805475 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.805530 4949 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c92e26c2-437f-456e-815e-341333febbaa" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.809142 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.809184 4949 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c92e26c2-437f-456e-815e-341333febbaa" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.866491 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 14:54:42 crc kubenswrapper[4949]: I0120 14:54:42.949372 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 14:54:43 crc kubenswrapper[4949]: I0120 14:54:43.507011 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.358070 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.359133 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sr2h8" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server" containerID="cri-o://09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.378104 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.378465 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jpqvc" 
podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server" containerID="cri-o://0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.391285 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.391583 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" containerID="cri-o://7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.397812 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.398251 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n6ccj" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server" containerID="cri-o://469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.407676 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.410817 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7gt2" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server" containerID="cri-o://c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.423387 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.423698 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qnf74" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" containerID="cri-o://98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" gracePeriod=30 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435283 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnrps"] Jan 20 14:54:48 crc kubenswrapper[4949]: E0120 14:54:48.435666 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435688 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 14:54:48 crc kubenswrapper[4949]: E0120 14:54:48.435724 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerName="installer" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435738 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerName="installer" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.435940 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7865dcac-fc72-4c7f-bd57-11f1c3bbb404" containerName="installer" Jan 20 14:54:48 crc 
kubenswrapper[4949]: I0120 14:54:48.435999 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.436657 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.443384 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnrps"] Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.531007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.531064 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.531166 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvdr\" (UniqueName: \"kubernetes.io/projected/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-kube-api-access-mkvdr\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.632240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvdr\" (UniqueName: \"kubernetes.io/projected/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-kube-api-access-mkvdr\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.632649 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.632682 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.637069 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cnrps\" (UID: 
\"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.640632 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.650899 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvdr\" (UniqueName: \"kubernetes.io/projected/e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74-kube-api-access-mkvdr\") pod \"marketplace-operator-79b997595-cnrps\" (UID: \"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74\") " pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.868434 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.869130 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.883626 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894657 4949 generic.go:334] "Generic (PLEG): container finished" podID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerID="7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894718 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerDied","Data":"7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894744 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" event={"ID":"25a072c1-c9a6-4a14-9eee-81f3f967503b","Type":"ContainerDied","Data":"37fb91e24d9502fca7001a77a1082aa104b29a70445d3ced18d4a89d50594cce"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894755 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fb91e24d9502fca7001a77a1082aa104b29a70445d3ced18d4a89d50594cce" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.894897 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899090 4949 generic.go:334] "Generic (PLEG): container finished" podID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899181 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899217 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sr2h8" event={"ID":"8827d4ac-468d-4ceb-91c1-fb310a00ddcd","Type":"ContainerDied","Data":"5386e0b6f5f81c0affeb756c00a742c0370df0824ff74eddb71abeead647e2e6"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.899235 4949 scope.go:117] "RemoveContainer" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.903822 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907352 4949 generic.go:334] "Generic (PLEG): container finished" podID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907492 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6ccj" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907511 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.907578 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6ccj" event={"ID":"3da2cb76-6534-4d77-95c0-3d6aaff0de4b","Type":"ContainerDied","Data":"2c82c694719569125cb4b0d6d88dd57bfb1cf02f2ceebb7cc5c8d3146224901c"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936035 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") pod \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936079 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") pod \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936103 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") pod \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936133 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") pod \"2747a148-c24a-4d08-a2ca-19261c14c359\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936159 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") pod \"78cf28ec-e605-49c2-882a-5cb98697605b\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936209 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") pod \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\" (UID: \"3da2cb76-6534-4d77-95c0-3d6aaff0de4b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936264 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") pod \"78cf28ec-e605-49c2-882a-5cb98697605b\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936321 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") pod \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936351 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") pod \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\" (UID: \"8827d4ac-468d-4ceb-91c1-fb310a00ddcd\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936379 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") pod \"2747a148-c24a-4d08-a2ca-19261c14c359\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") pod \"78cf28ec-e605-49c2-882a-5cb98697605b\" (UID: \"78cf28ec-e605-49c2-882a-5cb98697605b\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.936437 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") pod \"2747a148-c24a-4d08-a2ca-19261c14c359\" (UID: \"2747a148-c24a-4d08-a2ca-19261c14c359\") " Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.937857 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities" (OuterVolumeSpecName: "utilities") pod "3da2cb76-6534-4d77-95c0-3d6aaff0de4b" (UID: "3da2cb76-6534-4d77-95c0-3d6aaff0de4b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.975889 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities" (OuterVolumeSpecName: "utilities") pod "78cf28ec-e605-49c2-882a-5cb98697605b" (UID: "78cf28ec-e605-49c2-882a-5cb98697605b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.976139 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities" (OuterVolumeSpecName: "utilities") pod "8827d4ac-468d-4ceb-91c1-fb310a00ddcd" (UID: "8827d4ac-468d-4ceb-91c1-fb310a00ddcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.976917 4949 scope.go:117] "RemoveContainer" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977485 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977716 4949 generic.go:334] "Generic (PLEG): container finished" podID="2747a148-c24a-4d08-a2ca-19261c14c359" containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977770 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977793 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gt2" event={"ID":"2747a148-c24a-4d08-a2ca-19261c14c359","Type":"ContainerDied","Data":"c64e483ea895830221bcb3fd9971d012c5d2f19d12679860699582d93fd37367"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.977913 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gt2" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.978598 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities" (OuterVolumeSpecName: "utilities") pod "2747a148-c24a-4d08-a2ca-19261c14c359" (UID: "2747a148-c24a-4d08-a2ca-19261c14c359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.984172 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.984325 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk" (OuterVolumeSpecName: "kube-api-access-5kvhk") pod "78cf28ec-e605-49c2-882a-5cb98697605b" (UID: "78cf28ec-e605-49c2-882a-5cb98697605b"). InnerVolumeSpecName "kube-api-access-5kvhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.984947 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc" (OuterVolumeSpecName: "kube-api-access-n9xwc") pod "3da2cb76-6534-4d77-95c0-3d6aaff0de4b" (UID: "3da2cb76-6534-4d77-95c0-3d6aaff0de4b"). InnerVolumeSpecName "kube-api-access-n9xwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.986844 4949 generic.go:334] "Generic (PLEG): container finished" podID="13eef670-55b3-4832-a856-fe2bf8239996" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" exitCode=0 Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.986911 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.986942 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnf74" event={"ID":"13eef670-55b3-4832-a856-fe2bf8239996","Type":"ContainerDied","Data":"7033fb6c503e5baf2b93082863e51771e454c06c8d508e3b8282afa6c65fa61f"} Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.987922 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf" (OuterVolumeSpecName: "kube-api-access-tr8jf") pod "2747a148-c24a-4d08-a2ca-19261c14c359" (UID: "2747a148-c24a-4d08-a2ca-19261c14c359"). InnerVolumeSpecName "kube-api-access-tr8jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:48 crc kubenswrapper[4949]: I0120 14:54:48.987996 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528" (OuterVolumeSpecName: "kube-api-access-8h528") pod "8827d4ac-468d-4ceb-91c1-fb310a00ddcd" (UID: "8827d4ac-468d-4ceb-91c1-fb310a00ddcd"). InnerVolumeSpecName "kube-api-access-8h528". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.001929 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8827d4ac-468d-4ceb-91c1-fb310a00ddcd" (UID: "8827d4ac-468d-4ceb-91c1-fb310a00ddcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.004992 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3da2cb76-6534-4d77-95c0-3d6aaff0de4b" (UID: "3da2cb76-6534-4d77-95c0-3d6aaff0de4b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012595 4949 generic.go:334] "Generic (PLEG): container finished" podID="78cf28ec-e605-49c2-882a-5cb98697605b" containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a" exitCode=0 Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"} Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012690 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jpqvc" event={"ID":"78cf28ec-e605-49c2-882a-5cb98697605b","Type":"ContainerDied","Data":"cfc38db22b8953300879f0bf00176a88bf6635c28a6beffd49284a3128d08941"} Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.012810 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jpqvc" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.014424 4949 scope.go:117] "RemoveContainer" containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038052 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") pod \"25a072c1-c9a6-4a14-9eee-81f3f967503b\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038570 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") pod \"13eef670-55b3-4832-a856-fe2bf8239996\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038609 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") pod \"13eef670-55b3-4832-a856-fe2bf8239996\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038641 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") pod \"25a072c1-c9a6-4a14-9eee-81f3f967503b\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038677 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") pod \"25a072c1-c9a6-4a14-9eee-81f3f967503b\" (UID: \"25a072c1-c9a6-4a14-9eee-81f3f967503b\") " Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038709 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") pod \"13eef670-55b3-4832-a856-fe2bf8239996\" (UID: \"13eef670-55b3-4832-a856-fe2bf8239996\") " Jan 20 14:54:49 crc 
kubenswrapper[4949]: I0120 14:54:49.038909 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038928 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038939 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038948 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr8jf\" (UniqueName: \"kubernetes.io/projected/2747a148-c24a-4d08-a2ca-19261c14c359-kube-api-access-tr8jf\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038957 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9xwc\" (UniqueName: \"kubernetes.io/projected/3da2cb76-6534-4d77-95c0-3d6aaff0de4b-kube-api-access-n9xwc\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038966 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038974 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038983 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h528\" (UniqueName: \"kubernetes.io/projected/8827d4ac-468d-4ceb-91c1-fb310a00ddcd-kube-api-access-8h528\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.038992 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kvhk\" (UniqueName: \"kubernetes.io/projected/78cf28ec-e605-49c2-882a-5cb98697605b-kube-api-access-5kvhk\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.039000 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.040574 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78cf28ec-e605-49c2-882a-5cb98697605b" (UID: "78cf28ec-e605-49c2-882a-5cb98697605b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.040746 4949 scope.go:117] "RemoveContainer" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.040870 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities" (OuterVolumeSpecName: "utilities") pod "13eef670-55b3-4832-a856-fe2bf8239996" (UID: "13eef670-55b3-4832-a856-fe2bf8239996"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.041805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "25a072c1-c9a6-4a14-9eee-81f3f967503b" (UID: "25a072c1-c9a6-4a14-9eee-81f3f967503b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.041880 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a\": container with ID starting with 09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a not found: ID does not exist" containerID="09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.041906 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a"} err="failed to get container status \"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a\": rpc error: code = NotFound desc = could not find container \"09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a\": container with ID starting with 09c759e040e10a0ba1007c639607c4e12b2bec7727b861a1018c18d2df4e630a not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.041929 4949 scope.go:117] "RemoveContainer" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.042946 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6\": container with ID starting with 136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6 not found: ID does not exist" containerID="136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.042998 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6"} err="failed to get container status \"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6\": rpc error: code = NotFound desc = could not find container \"136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6\": container with ID starting with 136a871c06927ee764b1a2f161fffe6895eac43c945524b81657ddbd07b47ba6 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.043032 4949 scope.go:117] "RemoveContainer" 
containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.043312 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8\": container with ID starting with 83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8 not found: ID does not exist" containerID="83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.043337 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8"} err="failed to get container status \"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8\": rpc error: code = NotFound desc = could not find container \"83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8\": container with ID starting with 83f3833f028d8bd55f96c41215cb504656e82a5c030ef9d2ca726bd0ab0d1fc8 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.043354 4949 scope.go:117] "RemoveContainer" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.045753 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9" (OuterVolumeSpecName: "kube-api-access-v44h9") pod "25a072c1-c9a6-4a14-9eee-81f3f967503b" (UID: "25a072c1-c9a6-4a14-9eee-81f3f967503b"). InnerVolumeSpecName "kube-api-access-v44h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.045981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq" (OuterVolumeSpecName: "kube-api-access-cndjq") pod "13eef670-55b3-4832-a856-fe2bf8239996" (UID: "13eef670-55b3-4832-a856-fe2bf8239996"). InnerVolumeSpecName "kube-api-access-cndjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.046833 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "25a072c1-c9a6-4a14-9eee-81f3f967503b" (UID: "25a072c1-c9a6-4a14-9eee-81f3f967503b"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.058531 4949 scope.go:117] "RemoveContainer" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.073908 4949 scope.go:117] "RemoveContainer" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091098 4949 scope.go:117] "RemoveContainer" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.091447 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c\": container with ID starting with 469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c not found: ID does not exist" containerID="469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091487 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c"} err="failed to get container status \"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c\": rpc error: code = NotFound desc = could not find container \"469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c\": container with ID starting with 469cf185df22f9e5dd9641f0c3a4f8741606fd482b88351902cbae5db89acb4c not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091532 4949 scope.go:117] "RemoveContainer" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.091788 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6\": container with ID starting with abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6 not found: ID does not exist" containerID="abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091816 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6"} err="failed to get container status \"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6\": rpc error: code = NotFound desc = could not find container \"abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6\": container with ID starting with abca8ce8929384eb7c12840fe80c1f0d6d21844ba9184e4a84c1f6157a3215b6 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.091832 4949 scope.go:117] "RemoveContainer" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.092037 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241\": container with ID starting with 66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241 not found: ID does not exist" containerID="66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241" Jan 20 14:54:49 crc 
kubenswrapper[4949]: I0120 14:54:49.092063 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241"} err="failed to get container status \"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241\": rpc error: code = NotFound desc = could not find container \"66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241\": container with ID starting with 66352474fb1ac7ba653ca37e960e54b0a10b33b9dca2cb6df873403e99a4c241 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.092080 4949 scope.go:117] "RemoveContainer" containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.107827 4949 scope.go:117] "RemoveContainer" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.129146 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cnrps"] Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.129604 4949 scope.go:117] "RemoveContainer" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139873 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78cf28ec-e605-49c2-882a-5cb98697605b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139911 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139929 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cndjq\" (UniqueName: \"kubernetes.io/projected/13eef670-55b3-4832-a856-fe2bf8239996-kube-api-access-cndjq\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139942 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139953 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25a072c1-c9a6-4a14-9eee-81f3f967503b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.139966 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v44h9\" (UniqueName: \"kubernetes.io/projected/25a072c1-c9a6-4a14-9eee-81f3f967503b-kube-api-access-v44h9\") on node \"crc\" DevicePath \"\"" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.143643 4949 scope.go:117] "RemoveContainer" containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.143897 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192\": container with ID starting with c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192 not found: ID does not exist" 
containerID="c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.143933 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192"} err="failed to get container status \"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192\": rpc error: code = NotFound desc = could not find container \"c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192\": container with ID starting with c19ee4a03da79a336a312ceb172d76d62b57e9c1243bcf4db752910a7d90c192 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.143953 4949 scope.go:117] "RemoveContainer" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.144130 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549\": container with ID starting with 24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549 not found: ID does not exist" containerID="24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144152 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549"} err="failed to get container status \"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549\": rpc error: code = NotFound desc = could not find container \"24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549\": container with ID starting with 24a7e99aa752d7979645be10f4a894b5dc7bd2a503f87c4ca34024b6adfa3549 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144165 4949 scope.go:117] "RemoveContainer" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.144402 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688\": container with ID starting with 46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688 not found: ID does not exist" containerID="46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144429 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688"} err="failed to get container status \"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688\": rpc error: code = NotFound desc = could not find container \"46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688\": container with ID starting with 46f1f5e5dc139e7f0c1a076875f5fdf8c4a767bb44c54be5bc330bfc92a8a688 not found: ID does not exist" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.144442 4949 scope.go:117] "RemoveContainer" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6" Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.148573 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content" 
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.157615 4949 scope.go:117] "RemoveContainer" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.167064 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13eef670-55b3-4832-a856-fe2bf8239996" (UID: "13eef670-55b3-4832-a856-fe2bf8239996"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.175579 4949 scope.go:117] "RemoveContainer" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.196899 4949 scope.go:117] "RemoveContainer" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"
Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.197321 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6\": container with ID starting with 98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6 not found: ID does not exist" containerID="98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197363 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6"} err="failed to get container status \"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6\": rpc error: code = NotFound desc = could not find container \"98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6\": container with ID starting with 98c7a34472e23bd6b0b72cee97db09b6fb9d8f17bf4048d2b0063adf74c5cad6 not found: ID does not exist"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197397 4949 scope.go:117] "RemoveContainer" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"
Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.197740 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421\": container with ID starting with 3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421 not found: ID does not exist" containerID="3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197781 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421"} err="failed to get container status \"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421\": rpc error: code = NotFound desc = could not find container \"3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421\": container with ID starting with 3253e15f83a88d6f9c51f1049123e2f9708ec42b755c951b27d886503d29a421 not found: ID does not exist"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.197814 4949 scope.go:117] "RemoveContainer" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166"
Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.198190 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166\": container with ID starting with c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166 not found: ID does not exist" containerID="c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.198219 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166"} err="failed to get container status \"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166\": rpc error: code = NotFound desc = could not find container \"c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166\": container with ID starting with c133ed84a5a1ffc6e6a3eb20250cb87b5e33724c652c61aa3041b021bab6e166 not found: ID does not exist"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.198235 4949 scope.go:117] "RemoveContainer" containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.222933 4949 scope.go:117] "RemoveContainer" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.231837 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"]
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.234826 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6ccj"]
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.240512 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13eef670-55b3-4832-a856-fe2bf8239996-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.240547 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2747a148-c24a-4d08-a2ca-19261c14c359-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.240687 4949 scope.go:117] "RemoveContainer" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.255360 4949 scope.go:117] "RemoveContainer" containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"
Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.255900 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a\": container with ID starting with 0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a not found: ID does not exist" containerID="0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.255949 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a"} err="failed to get container status \"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a\": rpc error: code = NotFound desc = could not find container \"0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a\": container with ID starting with 0b63429be5e6d60fb7b115cac72458c7f1e1c52f8d790ad6574d0b95a16dcd2a not found: ID does not exist"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.255983 4949 scope.go:117] "RemoveContainer" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32"
Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.256386 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32\": container with ID starting with c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32 not found: ID does not exist" containerID="c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.256436 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32"} err="failed to get container status \"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32\": rpc error: code = NotFound desc = could not find container \"c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32\": container with ID starting with c66dcf7ced34584cab0dec6e3e5644ff797829a4e7c630e99f5169f9a3839a32 not found: ID does not exist"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.256475 4949 scope.go:117] "RemoveContainer" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be"
Jan 20 14:54:49 crc kubenswrapper[4949]: E0120 14:54:49.256857 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be\": container with ID starting with 758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be not found: ID does not exist" containerID="758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.256907 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be"} err="failed to get container status \"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be\": rpc error: code = NotFound desc = could not find container \"758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be\": container with ID starting with 758847d74c483c5646861d02ecf783b4a6adaaed38679ef2e806271641fce0be not found: ID does not exist"
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.303836 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"]
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.306477 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7gt2"]
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.337630 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"]
Jan 20 14:54:49 crc kubenswrapper[4949]: I0120 14:54:49.346760 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jpqvc"]
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.021795 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sr2h8"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.023939 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" event={"ID":"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74","Type":"ContainerStarted","Data":"4f19f3364c511a489b321f70c056d1c9670c2fbe97058d4c7fc3369964b75a06"}
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.023973 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" event={"ID":"e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74","Type":"ContainerStarted","Data":"ce8c1e2f43751fe26551b9f28e71733bcc9b16a2c90fa80d8883135dae67069f"}
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.024236 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.027975 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntmdh"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.027998 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnf74"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.032488 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.056332 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cnrps" podStartSLOduration=2.056312802 podStartE2EDuration="2.056312802s" podCreationTimestamp="2026-01-20 14:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:54:50.055122023 +0000 UTC m=+285.864952891" watchObservedRunningTime="2026-01-20 14:54:50.056312802 +0000 UTC m=+285.866143660"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.092064 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"]
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.097566 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sr2h8"]
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.126564 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"]
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.138339 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntmdh"]
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.149146 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"]
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.153564 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qnf74"]
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.796887 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13eef670-55b3-4832-a856-fe2bf8239996" path="/var/lib/kubelet/pods/13eef670-55b3-4832-a856-fe2bf8239996/volumes"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.798733 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" path="/var/lib/kubelet/pods/25a072c1-c9a6-4a14-9eee-81f3f967503b/volumes"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.799432 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" path="/var/lib/kubelet/pods/2747a148-c24a-4d08-a2ca-19261c14c359/volumes"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.800765 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" path="/var/lib/kubelet/pods/3da2cb76-6534-4d77-95c0-3d6aaff0de4b/volumes"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.801640 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" path="/var/lib/kubelet/pods/78cf28ec-e605-49c2-882a-5cb98697605b/volumes"
Jan 20 14:54:50 crc kubenswrapper[4949]: I0120 14:54:50.803053 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" path="/var/lib/kubelet/pods/8827d4ac-468d-4ceb-91c1-fb310a00ddcd/volumes"
Jan 20 14:55:04 crc kubenswrapper[4949]: I0120 14:55:04.595879 4949 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.565584 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"]
Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.566644 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" containerID="cri-o://12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" gracePeriod=30
Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.654972 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"]
Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.655191 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" containerID="cri-o://c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" gracePeriod=30
Jan 20 14:55:12 crc kubenswrapper[4949]: I0120 14:55:12.986642 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4"
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.029459 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.033898 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.034010 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.034047 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.038053 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.038682 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.039585 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config" (OuterVolumeSpecName: "config") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.039871 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") pod \"6278caf6-b4d9-414c-99ed-686de2b23a80\" (UID: \"6278caf6-b4d9-414c-99ed-686de2b23a80\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.040623 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.040649 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-config\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.041001 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca" (OuterVolumeSpecName: "client-ca") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.052993 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n" (OuterVolumeSpecName: "kube-api-access-5x48n") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "kube-api-access-5x48n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.056320 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6278caf6-b4d9-414c-99ed-686de2b23a80" (UID: "6278caf6-b4d9-414c-99ed-686de2b23a80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141599 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141664 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141723 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.141775 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") pod \"086b7727-a8b6-4416-a46e-60e4474e79e2\" (UID: \"086b7727-a8b6-4416-a46e-60e4474e79e2\") "
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142029 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6278caf6-b4d9-414c-99ed-686de2b23a80-client-ca\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142045 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6278caf6-b4d9-414c-99ed-686de2b23a80-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142057 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x48n\" (UniqueName: \"kubernetes.io/projected/6278caf6-b4d9-414c-99ed-686de2b23a80-kube-api-access-5x48n\") on node \"crc\" DevicePath \"\""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142389 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca" (OuterVolumeSpecName: "client-ca") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.142451 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config" (OuterVolumeSpecName: "config") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.144619 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.144688 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l" (OuterVolumeSpecName: "kube-api-access-j2v6l") pod "086b7727-a8b6-4416-a46e-60e4474e79e2" (UID: "086b7727-a8b6-4416-a46e-60e4474e79e2"). InnerVolumeSpecName "kube-api-access-j2v6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.173892 4949 generic.go:334] "Generic (PLEG): container finished" podID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" exitCode=0 Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.173951 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.173974 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerDied","Data":"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.174002 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv" event={"ID":"086b7727-a8b6-4416-a46e-60e4474e79e2","Type":"ContainerDied","Data":"6cdd7178026b2587db50c95fe7c40688b8e05cd993d070aa0db4f3a3e9c38e1f"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.174018 4949 scope.go:117] "RemoveContainer" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178216 4949 generic.go:334] "Generic (PLEG): container finished" podID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" exitCode=0 Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178418 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerDied","Data":"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178446 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" event={"ID":"6278caf6-b4d9-414c-99ed-686de2b23a80","Type":"ContainerDied","Data":"7907a15de121329f36757762b3c977e945ddc8acc2d24575b443ad7c91ad2f70"} Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.178573 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fz5x4" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.209889 4949 scope.go:117] "RemoveContainer" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" Jan 20 14:55:13 crc kubenswrapper[4949]: E0120 14:55:13.210977 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc\": container with ID starting with c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc not found: ID does not exist" containerID="c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.211018 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc"} err="failed to get container status \"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc\": rpc error: code = NotFound desc = could not find container \"c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc\": container with ID starting with c6f08e13a5f1375ee7274098191c6133d9e083dfbe3b903f3bceacd158ef19bc not found: ID does not exist" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.211045 4949 scope.go:117] "RemoveContainer" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.216558 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.219548 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zc5vv"] Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.227422 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.230305 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fz5x4"] Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.236216 4949 scope.go:117] "RemoveContainer" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" Jan 20 14:55:13 crc kubenswrapper[4949]: E0120 14:55:13.236724 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45\": container with ID starting with 12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45 not found: ID does not exist" containerID="12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.236775 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45"} err="failed to get container status \"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45\": rpc error: code = NotFound desc = could not find container \"12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45\": container with ID starting with 12dccb32a64aa2bd1ee35aff8b798c598d023dd682581be13f2012338b80ae45 not found: ID does not exist" Jan 20 14:55:13 crc 
kubenswrapper[4949]: I0120 14:55:13.243481 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.243542 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2v6l\" (UniqueName: \"kubernetes.io/projected/086b7727-a8b6-4416-a46e-60e4474e79e2-kube-api-access-j2v6l\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.243558 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/086b7727-a8b6-4416-a46e-60e4474e79e2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:13 crc kubenswrapper[4949]: I0120 14:55:13.243571 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/086b7727-a8b6-4416-a46e-60e4474e79e2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.231313 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"] Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232029 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232053 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232066 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232077 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232097 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232108 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232121 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232133 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232151 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232162 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232179 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232189 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232202 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232215 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="extract-utilities" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232230 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232242 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232261 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232272 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232290 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232301 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232317 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232328 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232344 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232355 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232368 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232379 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232401 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232412 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server" Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232425 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="extract-content" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 
Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232449 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232460 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232476 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232487 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.232503 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232537 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232681 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" containerName="controller-manager"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232705 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2747a148-c24a-4d08-a2ca-19261c14c359" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232724 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8827d4ac-468d-4ceb-91c1-fb310a00ddcd" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232740 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da2cb76-6534-4d77-95c0-3d6aaff0de4b" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232756 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cf28ec-e605-49c2-882a-5cb98697605b" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232767 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="13eef670-55b3-4832-a856-fe2bf8239996" containerName="registry-server"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232784 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a072c1-c9a6-4a14-9eee-81f3f967503b" containerName="marketplace-operator"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.232797 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" containerName="route-controller-manager"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.233267 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.236957 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237162 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237271 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237606 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.237911 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.238773 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"]
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.239706 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.243273 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"]
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.244121 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.244423 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.246458 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.246947 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"]
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247133 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247177 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247336 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.247355 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.254501 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356437 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356493 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356533 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356624 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356707 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356757 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356776 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.356806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.423247 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"] Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.423652 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-76kqf proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" podUID="8f231268-8959-425f-94a1-39d0ec215e63" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.440893 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"] Jan 20 14:55:14 crc kubenswrapper[4949]: E0120 14:55:14.441306 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-ms772 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" podUID="284b359a-a00f-4f88-bb5a-cd477997cfe2" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.457969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458117 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458155 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " 
pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.458416 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459192 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459270 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.459887 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.473340 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.473354 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.481448 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"controller-manager-8f6bd5c7f-7gvsm\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.481666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"route-controller-manager-5cc9f64745-pplkm\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.795140 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086b7727-a8b6-4416-a46e-60e4474e79e2" path="/var/lib/kubelet/pods/086b7727-a8b6-4416-a46e-60e4474e79e2/volumes" Jan 20 14:55:14 crc kubenswrapper[4949]: I0120 14:55:14.796356 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6278caf6-b4d9-414c-99ed-686de2b23a80" path="/var/lib/kubelet/pods/6278caf6-b4d9-414c-99ed-686de2b23a80/volumes" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.202163 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.203006 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.219902 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.224671 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.266980 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267037 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267135 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267180 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267230 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267263 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267298 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") pod \"8f231268-8959-425f-94a1-39d0ec215e63\" (UID: \"8f231268-8959-425f-94a1-39d0ec215e63\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.267334 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") pod \"284b359a-a00f-4f88-bb5a-cd477997cfe2\" (UID: \"284b359a-a00f-4f88-bb5a-cd477997cfe2\") " Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269008 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f231268-8959-425f-94a1-39d0ec215e63" 
(UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269502 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config" (OuterVolumeSpecName: "config") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269581 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config" (OuterVolumeSpecName: "config") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.269903 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca" (OuterVolumeSpecName: "client-ca") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.271173 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772" (OuterVolumeSpecName: "kube-api-access-ms772") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "kube-api-access-ms772". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.273884 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "284b359a-a00f-4f88-bb5a-cd477997cfe2" (UID: "284b359a-a00f-4f88-bb5a-cd477997cfe2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.275304 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf" (OuterVolumeSpecName: "kube-api-access-76kqf") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). InnerVolumeSpecName "kube-api-access-76kqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.275637 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f231268-8959-425f-94a1-39d0ec215e63" (UID: "8f231268-8959-425f-94a1-39d0ec215e63"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368801 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368846 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368860 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76kqf\" (UniqueName: \"kubernetes.io/projected/8f231268-8959-425f-94a1-39d0ec215e63-kube-api-access-76kqf\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368875 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b359a-a00f-4f88-bb5a-cd477997cfe2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368888 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f231268-8959-425f-94a1-39d0ec215e63-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368899 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368910 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b359a-a00f-4f88-bb5a-cd477997cfe2-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368921 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f231268-8959-425f-94a1-39d0ec215e63-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:15 crc kubenswrapper[4949]: I0120 14:55:15.368932 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms772\" (UniqueName: \"kubernetes.io/projected/284b359a-a00f-4f88-bb5a-cd477997cfe2-kube-api-access-ms772\") on node \"crc\" DevicePath \"\"" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.205922 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.205996 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.247385 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"] Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.252159 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-pplkm"] Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.260931 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.262214 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.264761 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265430 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265615 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265472 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.265549 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.267427 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.268038 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.282294 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"] Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.285841 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-7gvsm"] Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.379871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.379966 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 
20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.379991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.380086 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.482240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.483499 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.483609 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.489510 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.501880 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"route-controller-manager-d6ddf6c78-xchk9\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.581942 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.765018 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.795120 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284b359a-a00f-4f88-bb5a-cd477997cfe2" path="/var/lib/kubelet/pods/284b359a-a00f-4f88-bb5a-cd477997cfe2/volumes" Jan 20 14:55:16 crc kubenswrapper[4949]: I0120 14:55:16.795485 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f231268-8959-425f-94a1-39d0ec215e63" path="/var/lib/kubelet/pods/8f231268-8959-425f-94a1-39d0ec215e63/volumes" Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.211874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerStarted","Data":"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936"} Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.212210 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerStarted","Data":"313ea577e8b3ff4c88a60fc56d9fc090a87bd2fe0ebac3dc4712d691702faa51"} Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.212230 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.225595 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" podStartSLOduration=3.225577369 podStartE2EDuration="3.225577369s" podCreationTimestamp="2026-01-20 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:55:17.224909178 +0000 UTC m=+313.034740056" watchObservedRunningTime="2026-01-20 14:55:17.225577369 +0000 UTC m=+313.035408237" Jan 20 14:55:17 crc kubenswrapper[4949]: I0120 14:55:17.687076 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.234785 4949 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.236843 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.240454 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.241464 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.241841 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.242118 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.243511 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.245684 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.252985 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.253267 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.322545 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.322865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.322980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.323113 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 
14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.323192 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424349 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.424371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.425377 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.425655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.426732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.431693 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.443590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"controller-manager-754576dcc6-r48kb\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.560259 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:19 crc kubenswrapper[4949]: I0120 14:55:19.742975 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.230052 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerStarted","Data":"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea"} Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.230572 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.230611 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerStarted","Data":"05e85e4adc482ebfc0757de2dd7679466c3f64d1e6473ddc8666c2167f6cdf09"} Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.235092 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:55:20 crc kubenswrapper[4949]: I0120 14:55:20.251728 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" podStartSLOduration=6.251707725 podStartE2EDuration="6.251707725s" podCreationTimestamp="2026-01-20 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:55:20.247703191 +0000 UTC m=+316.057534139" watchObservedRunningTime="2026-01-20 14:55:20.251707725 +0000 UTC m=+316.061538593" Jan 20 14:55:57 crc kubenswrapper[4949]: I0120 14:55:57.151848 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:55:57 crc 
kubenswrapper[4949]: I0120 14:55:57.152449 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:56:06 crc kubenswrapper[4949]: I0120 14:56:06.979371 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7lmx"] Jan 20 14:56:06 crc kubenswrapper[4949]: I0120 14:56:06.981188 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:06 crc kubenswrapper[4949]: I0120 14:56:06.994280 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7lmx"] Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154551 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-trusted-ca\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84ec1440-abb3-49f3-ae31-abbb980aad98-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154609 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6btm\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-kube-api-access-z6btm\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154642 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84ec1440-abb3-49f3-ae31-abbb980aad98-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154681 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-certificates\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154716 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-bound-sa-token\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.154746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-tls\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.180916 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-tls\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256538 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-trusted-ca\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256573 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84ec1440-abb3-49f3-ae31-abbb980aad98-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256605 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6btm\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-kube-api-access-z6btm\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84ec1440-abb3-49f3-ae31-abbb980aad98-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256687 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-certificates\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.256714 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-bound-sa-token\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.257661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/84ec1440-abb3-49f3-ae31-abbb980aad98-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.258251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-trusted-ca\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.258414 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-certificates\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.262911 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-registry-tls\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.269244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/84ec1440-abb3-49f3-ae31-abbb980aad98-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.275388 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-bound-sa-token\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.275942 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6btm\" (UniqueName: \"kubernetes.io/projected/84ec1440-abb3-49f3-ae31-abbb980aad98-kube-api-access-z6btm\") pod \"image-registry-66df7c8f76-k7lmx\" (UID: \"84ec1440-abb3-49f3-ae31-abbb980aad98\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.297908 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:07 crc kubenswrapper[4949]: I0120 14:56:07.523330 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7lmx"] Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.530753 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" event={"ID":"84ec1440-abb3-49f3-ae31-abbb980aad98","Type":"ContainerStarted","Data":"ee8482ad744df465f8d01bb554a7859977858e6cdc32e54ce822bbb467347510"} Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.531110 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" event={"ID":"84ec1440-abb3-49f3-ae31-abbb980aad98","Type":"ContainerStarted","Data":"ca42e74ebd962a599a4cea7e55d4f726348461d2bc7400b1d3f8a6890e5e25c7"} Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.532058 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:08 crc kubenswrapper[4949]: I0120 14:56:08.552954 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" podStartSLOduration=2.552935817 podStartE2EDuration="2.552935817s" podCreationTimestamp="2026-01-20 14:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:56:08.549898686 +0000 UTC m=+364.359729544" watchObservedRunningTime="2026-01-20 14:56:08.552935817 +0000 UTC m=+364.362766675" Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.588569 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.590316 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" containerID="cri-o://fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" gracePeriod=30 Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.598419 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.598688 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" containerID="cri-o://dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" gracePeriod=30 Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.990560 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:56:12 crc kubenswrapper[4949]: I0120 14:56:12.995806 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130197 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130243 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130278 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130369 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130412 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") pod \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\" (UID: \"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130433 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.130456 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") pod \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\" (UID: \"210dd9ac-f90d-4fa4-aeca-016173d6bf53\") " Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131095 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config" (OuterVolumeSpecName: "config") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: 
"210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131226 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131320 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config" (OuterVolumeSpecName: "config") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131545 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131569 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131586 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-config\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.131989 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca" (OuterVolumeSpecName: "client-ca") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.132029 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.136792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k" (OuterVolumeSpecName: "kube-api-access-89c8k") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "kube-api-access-89c8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.137034 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210dd9ac-f90d-4fa4-aeca-016173d6bf53" (UID: "210dd9ac-f90d-4fa4-aeca-016173d6bf53"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.138191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9" (OuterVolumeSpecName: "kube-api-access-fnrn9") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "kube-api-access-fnrn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.144668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" (UID: "df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232645 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232685 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89c8k\" (UniqueName: \"kubernetes.io/projected/210dd9ac-f90d-4fa4-aeca-016173d6bf53-kube-api-access-89c8k\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232695 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210dd9ac-f90d-4fa4-aeca-016173d6bf53-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232706 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnrn9\" (UniqueName: \"kubernetes.io/projected/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-kube-api-access-fnrn9\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232715 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.232723 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/210dd9ac-f90d-4fa4-aeca-016173d6bf53-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.565742 4949 generic.go:334] "Generic (PLEG): container finished" podID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" exitCode=0 Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.565866 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerDied","Data":"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936"} Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.565900 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" event={"ID":"df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7","Type":"ContainerDied","Data":"313ea577e8b3ff4c88a60fc56d9fc090a87bd2fe0ebac3dc4712d691702faa51"} Jan 20 14:56:13 crc 
kubenswrapper[4949]: I0120 14:56:13.565929 4949 scope.go:117] "RemoveContainer" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.566236 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569201 4949 generic.go:334] "Generic (PLEG): container finished" podID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" exitCode=0 Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569241 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerDied","Data":"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea"} Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569270 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" event={"ID":"210dd9ac-f90d-4fa4-aeca-016173d6bf53","Type":"ContainerDied","Data":"05e85e4adc482ebfc0757de2dd7679466c3f64d1e6473ddc8666c2167f6cdf09"} Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.569405 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754576dcc6-r48kb" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.588013 4949 scope.go:117] "RemoveContainer" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" Jan 20 14:56:13 crc kubenswrapper[4949]: E0120 14:56:13.588598 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936\": container with ID starting with dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936 not found: ID does not exist" containerID="dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.588709 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936"} err="failed to get container status \"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936\": rpc error: code = NotFound desc = could not find container \"dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936\": container with ID starting with dab2264115ce46cdbbcf852898b994122990faba529d08cff1eaeb316fe39936 not found: ID does not exist" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.588814 4949 scope.go:117] "RemoveContainer" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.607489 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.607671 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d6ddf6c78-xchk9"] Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.610693 4949 scope.go:117] "RemoveContainer" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" Jan 20 14:56:13 crc 
kubenswrapper[4949]: E0120 14:56:13.611176 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea\": container with ID starting with fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea not found: ID does not exist" containerID="fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.611269 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea"} err="failed to get container status \"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea\": rpc error: code = NotFound desc = could not find container \"fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea\": container with ID starting with fccf57073023fb6e21dda268ac374826476d3a35f0e3bca02b597a2a7c498cea not found: ID does not exist" Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.615897 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:56:13 crc kubenswrapper[4949]: I0120 14:56:13.620306 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-754576dcc6-r48kb"] Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.275651 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"] Jan 20 14:56:14 crc kubenswrapper[4949]: E0120 14:56:14.276024 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.276064 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: E0120 14:56:14.276087 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.276104 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.276304 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" containerName="route-controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.276339 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" containerName="controller-manager" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.277106 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281131 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281416 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281574 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281694 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.281221 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.282018 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"] Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.282127 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.282914 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.284510 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.284731 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285073 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285184 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285391 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.285493 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.291247 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.292194 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"] Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.303002 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"] Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448045 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-client-ca\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448103 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448122 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxg4l\" (UniqueName: \"kubernetes.io/projected/46c5d356-4598-454d-9f32-304c9d1a003f-kube-api-access-sxg4l\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448151 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-client-ca\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448230 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgdsc\" (UniqueName: \"kubernetes.io/projected/f6e86fc1-f82b-4736-af73-a322d2324a73-kube-api-access-mgdsc\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448247 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c5d356-4598-454d-9f32-304c9d1a003f-serving-cert\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-config\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448313 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-config\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.448331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/f6e86fc1-f82b-4736-af73-a322d2324a73-serving-cert\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-config\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-config\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e86fc1-f82b-4736-af73-a322d2324a73-serving-cert\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549851 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-client-ca\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549898 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxg4l\" (UniqueName: \"kubernetes.io/projected/46c5d356-4598-454d-9f32-304c9d1a003f-kube-api-access-sxg4l\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-client-ca\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549965 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgdsc\" (UniqueName: \"kubernetes.io/projected/f6e86fc1-f82b-4736-af73-a322d2324a73-kube-api-access-mgdsc\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: 
\"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.549985 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c5d356-4598-454d-9f32-304c9d1a003f-serving-cert\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.551203 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-client-ca\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.551386 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-client-ca\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.551465 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-config\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.552302 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c5d356-4598-454d-9f32-304c9d1a003f-config\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.554148 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c5d356-4598-454d-9f32-304c9d1a003f-serving-cert\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.555882 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e86fc1-f82b-4736-af73-a322d2324a73-proxy-ca-bundles\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.563310 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e86fc1-f82b-4736-af73-a322d2324a73-serving-cert\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.565847 4949 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-mgdsc\" (UniqueName: \"kubernetes.io/projected/f6e86fc1-f82b-4736-af73-a322d2324a73-kube-api-access-mgdsc\") pod \"controller-manager-8f6bd5c7f-chqcm\" (UID: \"f6e86fc1-f82b-4736-af73-a322d2324a73\") " pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.572325 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxg4l\" (UniqueName: \"kubernetes.io/projected/46c5d356-4598-454d-9f32-304c9d1a003f-kube-api-access-sxg4l\") pod \"route-controller-manager-5cc9f64745-nwgxb\" (UID: \"46c5d356-4598-454d-9f32-304c9d1a003f\") " pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.596439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.608540 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.819157 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210dd9ac-f90d-4fa4-aeca-016173d6bf53" path="/var/lib/kubelet/pods/210dd9ac-f90d-4fa4-aeca-016173d6bf53/volumes" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.822044 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7" path="/var/lib/kubelet/pods/df4fc6c4-a8d6-4ea2-b329-c6ecc25a22b7/volumes" Jan 20 14:56:14 crc kubenswrapper[4949]: I0120 14:56:14.841901 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb"] Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.004258 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm"] Jan 20 14:56:15 crc kubenswrapper[4949]: W0120 14:56:15.013824 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e86fc1_f82b_4736_af73_a322d2324a73.slice/crio-97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477 WatchSource:0}: Error finding container 97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477: Status 404 returned error can't find the container with id 97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477 Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.582641 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" event={"ID":"46c5d356-4598-454d-9f32-304c9d1a003f","Type":"ContainerStarted","Data":"68c833d57e2c2677ee7859ae5870b6e92181c9a751d178e0ef6c641c60dd16b3"} Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.582987 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.583005 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" event={"ID":"46c5d356-4598-454d-9f32-304c9d1a003f","Type":"ContainerStarted","Data":"722267d30647c087ea63666ea27eb50b2039f24d55fbec97e3472ddcb3ce46e9"} Jan 20 14:56:15 crc 
kubenswrapper[4949]: I0120 14:56:15.586335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" event={"ID":"f6e86fc1-f82b-4736-af73-a322d2324a73","Type":"ContainerStarted","Data":"32ef3ca09555aaac38457eb5e80053aeeb9798c783ded45ac91108a98f6d9b21"} Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.586381 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" event={"ID":"f6e86fc1-f82b-4736-af73-a322d2324a73","Type":"ContainerStarted","Data":"97ef7f3097fe40eb944f3a3e554ede457cefca212abc84277e4c8457d36b3477"} Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.586671 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.590669 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.592643 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.629652 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cc9f64745-nwgxb" podStartSLOduration=3.629629261 podStartE2EDuration="3.629629261s" podCreationTimestamp="2026-01-20 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:56:15.626891358 +0000 UTC m=+371.436722226" watchObservedRunningTime="2026-01-20 14:56:15.629629261 +0000 UTC m=+371.439460119" Jan 20 14:56:15 crc kubenswrapper[4949]: I0120 14:56:15.707898 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8f6bd5c7f-chqcm" podStartSLOduration=3.707881048 podStartE2EDuration="3.707881048s" podCreationTimestamp="2026-01-20 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 14:56:15.705714239 +0000 UTC m=+371.515545097" watchObservedRunningTime="2026-01-20 14:56:15.707881048 +0000 UTC m=+371.517711896" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.779685 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xr695"] Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.783439 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.787766 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.805570 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xr695"] Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.956967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw974\" (UniqueName: \"kubernetes.io/projected/090c2072-966d-4848-82fc-c9aecee3d6c8-kube-api-access-fw974\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.957077 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-catalog-content\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.957489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-utilities\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.973304 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kmnv"] Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.974918 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.980592 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 14:56:21 crc kubenswrapper[4949]: I0120 14:56:21.990191 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kmnv"] Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-utilities\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059317 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2x7w\" (UniqueName: \"kubernetes.io/projected/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-kube-api-access-s2x7w\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059541 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-utilities\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw974\" (UniqueName: \"kubernetes.io/projected/090c2072-966d-4848-82fc-c9aecee3d6c8-kube-api-access-fw974\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059724 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-utilities\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.059981 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-catalog-content\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.060187 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-catalog-content\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.060725 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c2072-966d-4848-82fc-c9aecee3d6c8-catalog-content\") pod \"community-operators-xr695\" (UID: 
\"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.091114 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw974\" (UniqueName: \"kubernetes.io/projected/090c2072-966d-4848-82fc-c9aecee3d6c8-kube-api-access-fw974\") pod \"community-operators-xr695\" (UID: \"090c2072-966d-4848-82fc-c9aecee3d6c8\") " pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.110093 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.161678 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-catalog-content\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2x7w\" (UniqueName: \"kubernetes.io/projected/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-kube-api-access-s2x7w\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162082 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-utilities\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162644 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-catalog-content\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.162661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-utilities\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.180055 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2x7w\" (UniqueName: \"kubernetes.io/projected/a55010bf-14fe-4c92-8fe4-d2864bf74ad1-kube-api-access-s2x7w\") pod \"certified-operators-8kmnv\" (UID: \"a55010bf-14fe-4c92-8fe4-d2864bf74ad1\") " pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.336890 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.528885 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xr695"] Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.629759 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerStarted","Data":"ad832ffb9b4f52428f24eb998ee2602b9e5846006b4f7a3c41f9d611764ccd57"} Jan 20 14:56:22 crc kubenswrapper[4949]: I0120 14:56:22.759386 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kmnv"] Jan 20 14:56:22 crc kubenswrapper[4949]: W0120 14:56:22.770766 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55010bf_14fe_4c92_8fe4_d2864bf74ad1.slice/crio-3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16 WatchSource:0}: Error finding container 3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16: Status 404 returned error can't find the container with id 3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16 Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.637896 4949 generic.go:334] "Generic (PLEG): container finished" podID="090c2072-966d-4848-82fc-c9aecee3d6c8" containerID="7b0867ecabd0014be864d1e6f7bf0e03195528db1f305c62167566a01902fbf6" exitCode=0 Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.637978 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerDied","Data":"7b0867ecabd0014be864d1e6f7bf0e03195528db1f305c62167566a01902fbf6"} Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.639703 4949 generic.go:334] "Generic (PLEG): container finished" podID="a55010bf-14fe-4c92-8fe4-d2864bf74ad1" containerID="c5f9596722e2717ba4bbc6e2743e3b28ff8bfe70323955ba96468a2919203377" exitCode=0 Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.639822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerDied","Data":"c5f9596722e2717ba4bbc6e2743e3b28ff8bfe70323955ba96468a2919203377"} Jan 20 14:56:23 crc kubenswrapper[4949]: I0120 14:56:23.639848 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerStarted","Data":"3663a3f11225c6bf308bab7c5f35330e431ac0bbb63bbeb05bfab798f2caaf16"} Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.166970 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-97g5q"] Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.168615 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.170853 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.176959 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97g5q"] Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.315117 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-utilities\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.315185 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvw4q\" (UniqueName: \"kubernetes.io/projected/3f68902a-0bee-45a6-96c4-b4a80feaba0b-kube-api-access-bvw4q\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.315241 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-catalog-content\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.360852 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cmxfz"] Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.362227 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.364510 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.369905 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmxfz"] Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.417806 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-utilities\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.417913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvw4q\" (UniqueName: \"kubernetes.io/projected/3f68902a-0bee-45a6-96c4-b4a80feaba0b-kube-api-access-bvw4q\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.417948 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-catalog-content\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.419249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-catalog-content\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.419281 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f68902a-0bee-45a6-96c4-b4a80feaba0b-utilities\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.448423 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvw4q\" (UniqueName: \"kubernetes.io/projected/3f68902a-0bee-45a6-96c4-b4a80feaba0b-kube-api-access-bvw4q\") pod \"redhat-marketplace-97g5q\" (UID: \"3f68902a-0bee-45a6-96c4-b4a80feaba0b\") " pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.519508 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-utilities\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.519643 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccd9\" (UniqueName: \"kubernetes.io/projected/983905b2-cefb-487e-887f-630d669af9ec-kube-api-access-jccd9\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " 
pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.519698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-catalog-content\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.532388 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.620705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccd9\" (UniqueName: \"kubernetes.io/projected/983905b2-cefb-487e-887f-630d669af9ec-kube-api-access-jccd9\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.620782 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-catalog-content\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.620856 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-utilities\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.621419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-utilities\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.621915 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/983905b2-cefb-487e-887f-630d669af9ec-catalog-content\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.650452 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccd9\" (UniqueName: \"kubernetes.io/projected/983905b2-cefb-487e-887f-630d669af9ec-kube-api-access-jccd9\") pod \"redhat-operators-cmxfz\" (UID: \"983905b2-cefb-487e-887f-630d669af9ec\") " pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.660917 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerStarted","Data":"c60562e413ae3c6bf4dadc6a21fe6e75f11370a630aa11292bed0315d60cc57d"} Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.677623 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:24 crc kubenswrapper[4949]: I0120 14:56:24.977496 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97g5q"] Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.079437 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmxfz"] Jan 20 14:56:25 crc kubenswrapper[4949]: W0120 14:56:25.080766 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod983905b2_cefb_487e_887f_630d669af9ec.slice/crio-f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46 WatchSource:0}: Error finding container f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46: Status 404 returned error can't find the container with id f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.669668 4949 generic.go:334] "Generic (PLEG): container finished" podID="a55010bf-14fe-4c92-8fe4-d2864bf74ad1" containerID="6f94d99f7ae815f99c7c8c843c9517e97331040f874fb4ab2e873f25aa66d3c5" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.669788 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerDied","Data":"6f94d99f7ae815f99c7c8c843c9517e97331040f874fb4ab2e873f25aa66d3c5"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.674010 4949 generic.go:334] "Generic (PLEG): container finished" podID="983905b2-cefb-487e-887f-630d669af9ec" containerID="3268808c5d9377a01bda76e9067683e127450dd28bb9c6135711e6825f997d39" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.674099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerDied","Data":"3268808c5d9377a01bda76e9067683e127450dd28bb9c6135711e6825f997d39"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.674130 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerStarted","Data":"f58a583d49c7a9d32316a8097c8e7eb7befcec0503d62a153cab7fa6e4d44b46"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.679856 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f68902a-0bee-45a6-96c4-b4a80feaba0b" containerID="776e57ffb05d63b3a8f310d0ea4274d0c2770b33e77f95ca40e90a3aafeb13a0" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.680080 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerDied","Data":"776e57ffb05d63b3a8f310d0ea4274d0c2770b33e77f95ca40e90a3aafeb13a0"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.680128 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerStarted","Data":"4df821b1e618830109653783a7423beeaddee5e2149175a44fa61f71dc3770c7"} Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.682556 4949 generic.go:334] "Generic (PLEG): container finished" podID="090c2072-966d-4848-82fc-c9aecee3d6c8" 
containerID="c60562e413ae3c6bf4dadc6a21fe6e75f11370a630aa11292bed0315d60cc57d" exitCode=0 Jan 20 14:56:25 crc kubenswrapper[4949]: I0120 14:56:25.682600 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerDied","Data":"c60562e413ae3c6bf4dadc6a21fe6e75f11370a630aa11292bed0315d60cc57d"} Jan 20 14:56:26 crc kubenswrapper[4949]: I0120 14:56:26.690405 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr695" event={"ID":"090c2072-966d-4848-82fc-c9aecee3d6c8","Type":"ContainerStarted","Data":"a03cb826bb1586f2c70f06f895e417715734e69894da7cda0b67e4ad55710489"} Jan 20 14:56:26 crc kubenswrapper[4949]: I0120 14:56:26.692370 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerStarted","Data":"3454e921dfad33098cfded3c09253ff82449f494846d14a6fc38ed3c8085494d"} Jan 20 14:56:26 crc kubenswrapper[4949]: I0120 14:56:26.713353 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xr695" podStartSLOduration=3.233555087 podStartE2EDuration="5.713333355s" podCreationTimestamp="2026-01-20 14:56:21 +0000 UTC" firstStartedPulling="2026-01-20 14:56:23.640240564 +0000 UTC m=+379.450071422" lastFinishedPulling="2026-01-20 14:56:26.120018782 +0000 UTC m=+381.929849690" observedRunningTime="2026-01-20 14:56:26.711836096 +0000 UTC m=+382.521666954" watchObservedRunningTime="2026-01-20 14:56:26.713333355 +0000 UTC m=+382.523164213" Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.152167 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.152233 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.305660 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k7lmx" Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.357704 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.705982 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kmnv" event={"ID":"a55010bf-14fe-4c92-8fe4-d2864bf74ad1","Type":"ContainerStarted","Data":"4b45631f8d76233b71661d12cb46b94e4103b3108fa69f26be3698b342e57fd5"} Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.707966 4949 generic.go:334] "Generic (PLEG): container finished" podID="983905b2-cefb-487e-887f-630d669af9ec" containerID="3454e921dfad33098cfded3c09253ff82449f494846d14a6fc38ed3c8085494d" exitCode=0 Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.708020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerDied","Data":"3454e921dfad33098cfded3c09253ff82449f494846d14a6fc38ed3c8085494d"} Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.710077 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f68902a-0bee-45a6-96c4-b4a80feaba0b" containerID="7e212077e453b38deefbdf51b0127bbcfdbccff7a12198434ed1240e59fa9511" exitCode=0 Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.710147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerDied","Data":"7e212077e453b38deefbdf51b0127bbcfdbccff7a12198434ed1240e59fa9511"} Jan 20 14:56:27 crc kubenswrapper[4949]: I0120 14:56:27.731096 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kmnv" podStartSLOduration=3.51399369 podStartE2EDuration="6.731077341s" podCreationTimestamp="2026-01-20 14:56:21 +0000 UTC" firstStartedPulling="2026-01-20 14:56:23.641803766 +0000 UTC m=+379.451634634" lastFinishedPulling="2026-01-20 14:56:26.858887427 +0000 UTC m=+382.668718285" observedRunningTime="2026-01-20 14:56:27.725182267 +0000 UTC m=+383.535013125" watchObservedRunningTime="2026-01-20 14:56:27.731077341 +0000 UTC m=+383.540908209" Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.718255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmxfz" event={"ID":"983905b2-cefb-487e-887f-630d669af9ec","Type":"ContainerStarted","Data":"029d90f91b5bd943a69458376c2228d768a98a74a8fa0347e7eacecc3c6d0d9e"} Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.720340 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97g5q" event={"ID":"3f68902a-0bee-45a6-96c4-b4a80feaba0b","Type":"ContainerStarted","Data":"90052cc8e2c2be637a1705466ef64c1f7add3f5cb75ee4d630b16c8860a1c3b3"} Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.744111 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cmxfz" podStartSLOduration=2.211934063 podStartE2EDuration="4.74408214s" podCreationTimestamp="2026-01-20 14:56:24 +0000 UTC" firstStartedPulling="2026-01-20 14:56:25.674772413 +0000 UTC m=+381.484603271" lastFinishedPulling="2026-01-20 14:56:28.20692048 +0000 UTC m=+384.016751348" observedRunningTime="2026-01-20 14:56:28.737177552 +0000 UTC m=+384.547008410" watchObservedRunningTime="2026-01-20 14:56:28.74408214 +0000 UTC m=+384.553913038" Jan 20 14:56:28 crc kubenswrapper[4949]: I0120 14:56:28.754783 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-97g5q" podStartSLOduration=1.992103323 podStartE2EDuration="4.754765013s" podCreationTimestamp="2026-01-20 14:56:24 +0000 UTC" firstStartedPulling="2026-01-20 14:56:25.681099242 +0000 UTC m=+381.490930100" lastFinishedPulling="2026-01-20 14:56:28.443760932 +0000 UTC m=+384.253591790" observedRunningTime="2026-01-20 14:56:28.752160557 +0000 UTC m=+384.561991445" watchObservedRunningTime="2026-01-20 14:56:28.754765013 +0000 UTC m=+384.564595871" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.111501 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 
14:56:32.111822 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.173413 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.338006 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.338281 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.388788 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.777420 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kmnv" Jan 20 14:56:32 crc kubenswrapper[4949]: I0120 14:56:32.780452 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xr695" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.533538 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.533893 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.572530 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.678192 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.678235 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.723991 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.795619 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cmxfz" Jan 20 14:56:34 crc kubenswrapper[4949]: I0120 14:56:34.796106 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-97g5q" Jan 20 14:56:52 crc kubenswrapper[4949]: I0120 14:56:52.392532 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" containerID="cri-o://acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" gracePeriod=30 Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.629744 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749399 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749817 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749886 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.749956 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750016 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750089 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750145 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.750178 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") pod \"595f245f-676f-4ef1-8073-5e235b4a338a\" (UID: \"595f245f-676f-4ef1-8073-5e235b4a338a\") " Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.751155 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.752004 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.756132 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.756260 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.756962 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.757738 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h" (OuterVolumeSpecName: "kube-api-access-knt8h") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "kube-api-access-knt8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.767066 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.778904 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "595f245f-676f-4ef1-8073-5e235b4a338a" (UID: "595f245f-676f-4ef1-8073-5e235b4a338a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852382 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852434 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knt8h\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-kube-api-access-knt8h\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852451 4949 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852463 4949 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595f245f-676f-4ef1-8073-5e235b4a338a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852475 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595f245f-676f-4ef1-8073-5e235b4a338a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852489 4949 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595f245f-676f-4ef1-8073-5e235b4a338a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.852501 4949 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595f245f-676f-4ef1-8073-5e235b4a338a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878285 4949 generic.go:334] "Generic (PLEG): container finished" podID="595f245f-676f-4ef1-8073-5e235b4a338a" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" exitCode=0 Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerDied","Data":"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f"} Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878360 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878382 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x8799" event={"ID":"595f245f-676f-4ef1-8073-5e235b4a338a","Type":"ContainerDied","Data":"75534abbef0ad3bbb82a5a368c94e3e3c976a84596ece27d24989708a9fa01e9"} Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.878399 4949 scope.go:117] "RemoveContainer" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.898932 4949 scope.go:117] "RemoveContainer" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" Jan 20 14:56:54 crc kubenswrapper[4949]: E0120 14:56:54.899794 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f\": container with ID starting with acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f not found: ID does not exist" containerID="acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.899879 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f"} err="failed to get container status \"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f\": rpc error: code = NotFound desc = could not find container \"acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f\": container with ID starting with acf1d85a2c3b70930e0eb3156fff791a0d5d5557a2f8402af0d995403eca7d0f not found: ID does not exist" Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.901884 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:56:54 crc kubenswrapper[4949]: I0120 14:56:54.907441 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x8799"] Jan 20 14:56:56 crc kubenswrapper[4949]: I0120 14:56:56.796921 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" path="/var/lib/kubelet/pods/595f245f-676f-4ef1-8073-5e235b4a338a/volumes" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.152442 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.152559 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.152634 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.153412 4949 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.153510 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d" gracePeriod=600 Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.897216 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d" exitCode=0 Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.897324 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d"} Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.899972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259"} Jan 20 14:56:57 crc kubenswrapper[4949]: I0120 14:56:57.900069 4949 scope.go:117] "RemoveContainer" containerID="575dec1481462ef767d883009c953763cc58734bb9b2847643344d200074cd28" Jan 20 14:58:57 crc kubenswrapper[4949]: I0120 14:58:57.152930 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:58:57 crc kubenswrapper[4949]: I0120 14:58:57.153420 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:59:04 crc kubenswrapper[4949]: I0120 14:59:04.993351 4949 scope.go:117] "RemoveContainer" containerID="7d0993573a1d788b3633593094dda37a6358ba5deac428ae5f04766b6026d98a" Jan 20 14:59:27 crc kubenswrapper[4949]: I0120 14:59:27.152361 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:59:27 crc kubenswrapper[4949]: I0120 14:59:27.152937 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.152141 4949 
patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.152776 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.152834 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.153602 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 14:59:57 crc kubenswrapper[4949]: I0120 14:59:57.153693 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259" gracePeriod=600 Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019197 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259" exitCode=0 Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259"} Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019888 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c"} Jan 20 14:59:58 crc kubenswrapper[4949]: I0120 14:59:58.019943 4949 scope.go:117] "RemoveContainer" containerID="c172336d898ec3740efe5e354114975d8e1616430213682de8603f7b5d86515d" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.171634 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:00:00 crc kubenswrapper[4949]: E0120 15:00:00.172772 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.172793 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.173017 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="595f245f-676f-4ef1-8073-5e235b4a338a" containerName="registry" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.174189 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.176284 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.177803 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.179856 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.190776 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.190924 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.190967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.293203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.293385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.293419 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.295180 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.301771 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.308335 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"collect-profiles-29482020-7x2fc\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.491651 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:00 crc kubenswrapper[4949]: I0120 15:00:00.682323 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:00:00 crc kubenswrapper[4949]: W0120 15:00:00.686083 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod591138ca_7bcb_4584_8089_82e6223d1457.slice/crio-0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d WatchSource:0}: Error finding container 0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d: Status 404 returned error can't find the container with id 0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.041653 4949 generic.go:334] "Generic (PLEG): container finished" podID="591138ca-7bcb-4584-8089-82e6223d1457" containerID="4ff5f836d3d163418d95ceb0986956f845ac79923a1ad3950a5ae54e3538d3fc" exitCode=0 Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.041737 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" event={"ID":"591138ca-7bcb-4584-8089-82e6223d1457","Type":"ContainerDied","Data":"4ff5f836d3d163418d95ceb0986956f845ac79923a1ad3950a5ae54e3538d3fc"} Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.041789 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" event={"ID":"591138ca-7bcb-4584-8089-82e6223d1457","Type":"ContainerStarted","Data":"0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d"} Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.958171 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9x9js"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.959308 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.961838 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.962042 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.962604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-p5pww" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.969069 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-k9xq5"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.970152 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.970158 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9x9js"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.974401 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-x98t6" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.987684 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wdg2b"] Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.988440 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.990686 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jfpsc" Jan 20 15:00:01 crc kubenswrapper[4949]: I0120 15:00:01.993842 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-k9xq5"] Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.016406 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc5kd\" (UniqueName: \"kubernetes.io/projected/512fc928-abb3-4353-9543-be5d35cd8ccd-kube-api-access-jc5kd\") pod \"cert-manager-webhook-687f57d79b-wdg2b\" (UID: \"512fc928-abb3-4353-9543-be5d35cd8ccd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.016461 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk4g7\" (UniqueName: \"kubernetes.io/projected/9cd775b9-2d07-40bb-964c-6e935aa6775a-kube-api-access-rk4g7\") pod \"cert-manager-cainjector-cf98fcc89-9x9js\" (UID: \"9cd775b9-2d07-40bb-964c-6e935aa6775a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.016493 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4rx\" (UniqueName: \"kubernetes.io/projected/1ca44809-a121-411d-8be6-f1a8b879b97f-kube-api-access-cm4rx\") pod \"cert-manager-858654f9db-k9xq5\" (UID: \"1ca44809-a121-411d-8be6-f1a8b879b97f\") " pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.025089 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wdg2b"] Jan 20 15:00:02 crc 
kubenswrapper[4949]: I0120 15:00:02.117046 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk4g7\" (UniqueName: \"kubernetes.io/projected/9cd775b9-2d07-40bb-964c-6e935aa6775a-kube-api-access-rk4g7\") pod \"cert-manager-cainjector-cf98fcc89-9x9js\" (UID: \"9cd775b9-2d07-40bb-964c-6e935aa6775a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.117274 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4rx\" (UniqueName: \"kubernetes.io/projected/1ca44809-a121-411d-8be6-f1a8b879b97f-kube-api-access-cm4rx\") pod \"cert-manager-858654f9db-k9xq5\" (UID: \"1ca44809-a121-411d-8be6-f1a8b879b97f\") " pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.117325 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc5kd\" (UniqueName: \"kubernetes.io/projected/512fc928-abb3-4353-9543-be5d35cd8ccd-kube-api-access-jc5kd\") pod \"cert-manager-webhook-687f57d79b-wdg2b\" (UID: \"512fc928-abb3-4353-9543-be5d35cd8ccd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.139553 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk4g7\" (UniqueName: \"kubernetes.io/projected/9cd775b9-2d07-40bb-964c-6e935aa6775a-kube-api-access-rk4g7\") pod \"cert-manager-cainjector-cf98fcc89-9x9js\" (UID: \"9cd775b9-2d07-40bb-964c-6e935aa6775a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.140422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc5kd\" (UniqueName: \"kubernetes.io/projected/512fc928-abb3-4353-9543-be5d35cd8ccd-kube-api-access-jc5kd\") pod \"cert-manager-webhook-687f57d79b-wdg2b\" (UID: \"512fc928-abb3-4353-9543-be5d35cd8ccd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.174500 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4rx\" (UniqueName: \"kubernetes.io/projected/1ca44809-a121-411d-8be6-f1a8b879b97f-kube-api-access-cm4rx\") pod \"cert-manager-858654f9db-k9xq5\" (UID: \"1ca44809-a121-411d-8be6-f1a8b879b97f\") " pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.281353 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.283019 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.307751 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-k9xq5" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.327991 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.420354 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") pod \"591138ca-7bcb-4584-8089-82e6223d1457\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.420737 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") pod \"591138ca-7bcb-4584-8089-82e6223d1457\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.420787 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") pod \"591138ca-7bcb-4584-8089-82e6223d1457\" (UID: \"591138ca-7bcb-4584-8089-82e6223d1457\") " Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.421709 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume" (OuterVolumeSpecName: "config-volume") pod "591138ca-7bcb-4584-8089-82e6223d1457" (UID: "591138ca-7bcb-4584-8089-82e6223d1457"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.427664 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "591138ca-7bcb-4584-8089-82e6223d1457" (UID: "591138ca-7bcb-4584-8089-82e6223d1457"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.431044 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k" (OuterVolumeSpecName: "kube-api-access-tws4k") pod "591138ca-7bcb-4584-8089-82e6223d1457" (UID: "591138ca-7bcb-4584-8089-82e6223d1457"). InnerVolumeSpecName "kube-api-access-tws4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.502942 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9x9js"] Jan 20 15:00:02 crc kubenswrapper[4949]: W0120 15:00:02.513625 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd775b9_2d07_40bb_964c_6e935aa6775a.slice/crio-153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a WatchSource:0}: Error finding container 153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a: Status 404 returned error can't find the container with id 153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.516440 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.522241 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tws4k\" (UniqueName: \"kubernetes.io/projected/591138ca-7bcb-4584-8089-82e6223d1457-kube-api-access-tws4k\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.522278 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/591138ca-7bcb-4584-8089-82e6223d1457-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.522294 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/591138ca-7bcb-4584-8089-82e6223d1457-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.785859 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wdg2b"] Jan 20 15:00:02 crc kubenswrapper[4949]: I0120 15:00:02.842002 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-k9xq5"] Jan 20 15:00:02 crc kubenswrapper[4949]: W0120 15:00:02.844017 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ca44809_a121_411d_8be6_f1a8b879b97f.slice/crio-e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd WatchSource:0}: Error finding container e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd: Status 404 returned error can't find the container with id e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.065119 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-k9xq5" event={"ID":"1ca44809-a121-411d-8be6-f1a8b879b97f","Type":"ContainerStarted","Data":"e36d75bca981fe1d4cd99832edf34691b2c33538b53a11119dbdfa2cc09253dd"} Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.066567 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" event={"ID":"9cd775b9-2d07-40bb-964c-6e935aa6775a","Type":"ContainerStarted","Data":"153de12bbc12be8fa95947f68583b010f8b109a93ea41fe8394371a0ef744e6a"} Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.069737 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.069747 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc" event={"ID":"591138ca-7bcb-4584-8089-82e6223d1457","Type":"ContainerDied","Data":"0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d"} Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.069803 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a41044a31dd65417d80764368d0acdfd4c955c73320350a253258f7747ffa1d" Jan 20 15:00:03 crc kubenswrapper[4949]: I0120 15:00:03.071864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" event={"ID":"512fc928-abb3-4353-9543-be5d35cd8ccd","Type":"ContainerStarted","Data":"b2ef08dc2eef1aeeb1186c19d25617c0ec7238aba9efb2669767b7d5f22705c9"} Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.098465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-k9xq5" event={"ID":"1ca44809-a121-411d-8be6-f1a8b879b97f","Type":"ContainerStarted","Data":"ad02cc3732e8b65ef671b097d77719b491b40c6f1470dff6d3a65d8c6c422445"} Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.100857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" event={"ID":"9cd775b9-2d07-40bb-964c-6e935aa6775a","Type":"ContainerStarted","Data":"4beaba6dcab7b8f832b20753978fc52da156336c7d361e1775c8b8e9fd86000e"} Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.104072 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" event={"ID":"512fc928-abb3-4353-9543-be5d35cd8ccd","Type":"ContainerStarted","Data":"44e9f911114192e960f4f274f436b15d26a8e93053969735dd1f61d46c174dee"} Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.104147 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.120508 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-k9xq5" podStartSLOduration=3.06217718 podStartE2EDuration="7.12048422s" podCreationTimestamp="2026-01-20 15:00:01 +0000 UTC" firstStartedPulling="2026-01-20 15:00:02.845745876 +0000 UTC m=+598.655576734" lastFinishedPulling="2026-01-20 15:00:06.904052916 +0000 UTC m=+602.713883774" observedRunningTime="2026-01-20 15:00:08.114449094 +0000 UTC m=+603.924279952" watchObservedRunningTime="2026-01-20 15:00:08.12048422 +0000 UTC m=+603.930315088" Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.135266 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9x9js" podStartSLOduration=2.6860658710000003 podStartE2EDuration="7.135221278s" podCreationTimestamp="2026-01-20 15:00:01 +0000 UTC" firstStartedPulling="2026-01-20 15:00:02.51622176 +0000 UTC m=+598.326052618" lastFinishedPulling="2026-01-20 15:00:06.965377167 +0000 UTC m=+602.775208025" observedRunningTime="2026-01-20 15:00:08.134481024 +0000 UTC m=+603.944311882" watchObservedRunningTime="2026-01-20 15:00:08.135221278 +0000 UTC m=+603.945052136" Jan 20 15:00:08 crc kubenswrapper[4949]: I0120 15:00:08.153739 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" podStartSLOduration=3.037491508 podStartE2EDuration="7.153719528s" podCreationTimestamp="2026-01-20 15:00:01 +0000 UTC" firstStartedPulling="2026-01-20 15:00:02.788215629 +0000 UTC m=+598.598046477" lastFinishedPulling="2026-01-20 15:00:06.904443639 +0000 UTC m=+602.714274497" observedRunningTime="2026-01-20 15:00:08.149167291 +0000 UTC m=+603.958998169" watchObservedRunningTime="2026-01-20 15:00:08.153719528 +0000 UTC m=+603.963550406" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.560344 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561048 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" containerID="cri-o://8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561456 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" containerID="cri-o://4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561534 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" containerID="cri-o://acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561593 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" containerID="cri-o://2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561637 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561681 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" containerID="cri-o://88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.561718 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" containerID="cri-o://747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.600768 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" 
containerID="cri-o://c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" gracePeriod=30 Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.877552 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.880073 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-acl-logging/0.log" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.880792 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-controller/0.log" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.881200 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935453 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mxmf"] Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935686 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935702 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935710 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935718 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935726 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kubecfg-setup" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935732 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kubecfg-setup" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935738 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935744 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935751 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935759 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935769 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935775 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 
15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935782 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935790 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935803 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935811 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935819 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935826 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935836 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935843 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935852 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935860 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935866 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935873 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.935882 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591138ca-7bcb-4584-8089-82e6223d1457" containerName="collect-profiles" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935887 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="591138ca-7bcb-4584-8089-82e6223d1457" containerName="collect-profiles" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935972 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="nbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935982 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935991 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.935998 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="591138ca-7bcb-4584-8089-82e6223d1457" 
containerName="collect-profiles" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936004 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="kube-rbac-proxy-node" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936012 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936018 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936024 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-acl-logging" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936032 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovn-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936042 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="sbdb" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936048 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="northd" Jan 20 15:00:11 crc kubenswrapper[4949]: E0120 15:00:11.936124 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936132 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936215 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.936378 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerName="ovnkube-controller" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.937662 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.939687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-kubelet\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-config\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940217 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-script-lib\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940242 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovn-node-metrics-cert\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940259 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-etc-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940288 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-var-lib-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940303 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-bin\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940316 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-systemd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940339 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6h6\" (UniqueName: 
\"kubernetes.io/projected/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-kube-api-access-sx6h6\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940359 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-netns\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-slash\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940405 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940419 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-env-overrides\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940439 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-netd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-log-socket\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940469 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-systemd-units\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940552 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940581 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-ovn\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:11 crc kubenswrapper[4949]: I0120 15:00:11.940608 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-node-log\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041003 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041053 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041106 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket" (OuterVolumeSpecName: "log-socket") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041125 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log" (OuterVolumeSpecName: "node-log") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041235 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041620 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041644 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041692 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041717 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041732 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041735 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.041854 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042196 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042328 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042362 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042383 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042410 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042433 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042450 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042470 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042473 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042503 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042583 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042601 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042629 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042652 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") pod \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\" (UID: \"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04\") " Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042785 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042813 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash" (OuterVolumeSpecName: "host-slash") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042835 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042818 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042869 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042877 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042876 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-netd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042898 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042917 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-netd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042918 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-log-socket\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-log-socket\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042962 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-systemd-units\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.042998 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-ovn\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043002 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043030 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-systemd-units\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043048 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-node-log\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043057 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-ovn\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043077 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-kubelet\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-kubelet\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043104 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-node-log\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043140 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-config\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc 
kubenswrapper[4949]: I0120 15:00:12.043168 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-script-lib\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovn-node-metrics-cert\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043241 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-etc-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043280 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-var-lib-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043304 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-bin\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-systemd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043365 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6h6\" (UniqueName: \"kubernetes.io/projected/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-kube-api-access-sx6h6\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043402 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-netns\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043431 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043454 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-slash\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043512 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043550 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-env-overrides\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043616 4949 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043629 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043641 4949 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043653 4949 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043664 4949 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-slash\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043675 4949 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043686 4949 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043696 4949 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043707 4949 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043717 
4949 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043729 4949 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043740 4949 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-node-log\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043749 4949 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-log-socket\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043758 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043766 4949 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043774 4949 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043783 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043846 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-config\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.043887 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-systemd\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044186 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-env-overrides\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovnkube-script-lib\") pod \"ovnkube-node-8mxmf\" (UID: 
\"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044463 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-netns\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044492 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-run-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044525 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-var-lib-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044528 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-slash\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044560 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-cni-bin\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044544 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.044582 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-etc-openvswitch\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.046789 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb" (OuterVolumeSpecName: "kube-api-access-z9cmb") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "kube-api-access-z9cmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.047406 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.047993 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-ovn-node-metrics-cert\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.054960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" (UID: "775d7cfb-d5e3-457d-a7fa-4f0bdb752d04"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.060016 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6h6\" (UniqueName: \"kubernetes.io/projected/64a3b80e-47e3-4bd6-8f47-7160cb0ce59a-kube-api-access-sx6h6\") pod \"ovnkube-node-8mxmf\" (UID: \"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.126737 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127685 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/1.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127727 4949 generic.go:334] "Generic (PLEG): container finished" podID="3ac16078-f295-4f4b-875c-a8505e87b9da" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" exitCode=2 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127820 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerDied","Data":"8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.127873 4949 scope.go:117] "RemoveContainer" containerID="2288921d30687fa912bd30288e76476322a58a375d8e9e026d65474972541fe1" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.128334 4949 scope.go:117] "RemoveContainer" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.128484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2szcd_openshift-multus(3ac16078-f295-4f4b-875c-a8505e87b9da)\"" pod="openshift-multus/multus-2szcd" podUID="3ac16078-f295-4f4b-875c-a8505e87b9da" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.131243 4949 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovnkube-controller/3.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.146740 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-acl-logging/0.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.147980 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6zd5_775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/ovn-controller/0.log" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.148042 4949 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.148071 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.148085 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9cmb\" (UniqueName: \"kubernetes.io/projected/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04-kube-api-access-z9cmb\") on node \"crc\" DevicePath \"\"" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149634 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149664 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149671 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149678 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149685 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149691 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" exitCode=0 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149697 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" exitCode=143 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149707 4949 generic.go:334] "Generic (PLEG): container finished" podID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" exitCode=143 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149737 
4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149793 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149805 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149752 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149834 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149909 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149915 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149921 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149926 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149931 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149935 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149940 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149945 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149950 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149957 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149965 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149973 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149978 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149984 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149989 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.149998 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150003 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150009 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150014 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150019 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150026 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150033 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150040 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150046 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150051 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150056 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150061 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150066 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150071 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150076 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150081 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150087 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6zd5" event={"ID":"775d7cfb-d5e3-457d-a7fa-4f0bdb752d04","Type":"ContainerDied","Data":"5dc152895067f752c82569c5577107d59af356358dbd2eb55b9818a3b6c13db7"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150095 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150107 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150114 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150119 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150125 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150130 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150135 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150140 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150145 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.150150 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.172625 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.191209 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.195948 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.197345 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6zd5"] Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.212108 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.224278 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.235252 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.245272 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.255109 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.259225 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.273200 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: W0120 15:00:12.280995 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a3b80e_47e3_4bd6_8f47_7160cb0ce59a.slice/crio-de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96 WatchSource:0}: Error finding container de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96: Status 404 returned error can't find the container with id de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96 Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.286676 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.306907 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321048 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.321535 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321566 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321585 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.321862 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321908 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc 
error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.321942 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.322306 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322351 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322380 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.322817 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322842 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.322862 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.323108 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323126 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323137 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.323363 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323380 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323391 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.323727 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323746 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.323758 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.324074 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" 
containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324098 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324114 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.324373 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324395 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324410 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: E0120 15:00:12.324646 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324679 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.324696 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325014 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325040 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325323 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325348 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325754 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.325775 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326014 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326089 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326801 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 
20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.326835 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327059 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327076 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327394 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327407 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327705 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.327733 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328035 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328057 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328367 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status 
\"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328394 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328920 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.328939 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.330430 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.330455 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.330753 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.330770 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.331100 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wdg2b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.331787 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with 
acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.331823 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332100 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332133 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332389 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332418 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332696 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332718 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332942 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.332973 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.333169 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.333189 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.334443 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.334473 4949 scope.go:117] "RemoveContainer" containerID="c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335449 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52"} err="failed to get container status \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": rpc error: code = NotFound desc = could not find container \"c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52\": container with ID starting with c5253e253163dc30e16960579de46fee713a73a3f84de84111fd54ea316b2e52 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335481 4949 scope.go:117] "RemoveContainer" containerID="5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335843 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb"} err="failed to get container status \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": rpc error: code = NotFound desc = could not find container \"5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb\": container with ID starting with 5f40807c0a0c6621526a6e937bb23d84dfabe3f36871b6100d9a687d18e520fb not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.335896 4949 scope.go:117] "RemoveContainer" containerID="4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336141 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449"} err="failed to get container status \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": rpc error: code = NotFound desc = could not find container \"4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449\": container with ID starting with 4291b1c4b5f8aaa8a90597eab270725cd685789ab7f534b0ce3b4ed129ad7449 not found: ID does not exist" Jan 
20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336164 4949 scope.go:117] "RemoveContainer" containerID="acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336379 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4"} err="failed to get container status \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": rpc error: code = NotFound desc = could not find container \"acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4\": container with ID starting with acc6352d8c872616c4160730020597e8e1ca032ef28c208ba4c8a83607e83df4 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336413 4949 scope.go:117] "RemoveContainer" containerID="2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336642 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b"} err="failed to get container status \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": rpc error: code = NotFound desc = could not find container \"2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b\": container with ID starting with 2a0c74f1e2c7d24c53856aef43452a8f296a00428b31019ff65ea00262b2272b not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336664 4949 scope.go:117] "RemoveContainer" containerID="1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336881 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa"} err="failed to get container status \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": rpc error: code = NotFound desc = could not find container \"1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa\": container with ID starting with 1d22fbf921445f9f96f6f26ac071c2d2f65418f25e75f4707f08e747deb695aa not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.336910 4949 scope.go:117] "RemoveContainer" containerID="88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337101 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97"} err="failed to get container status \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": rpc error: code = NotFound desc = could not find container \"88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97\": container with ID starting with 88f8550b9a18a3bda17682f506ea87ad9c8f8b16591819aca363c427e3b35c97 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337130 4949 scope.go:117] "RemoveContainer" containerID="747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337912 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5"} err="failed to get container status 
\"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": rpc error: code = NotFound desc = could not find container \"747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5\": container with ID starting with 747309e6f720c73ebea3764c246267047f0c96a488f4a2c2a6720e5c1b7a9ad5 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.337936 4949 scope.go:117] "RemoveContainer" containerID="8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.338151 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d"} err="failed to get container status \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": rpc error: code = NotFound desc = could not find container \"8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d\": container with ID starting with 8c6ba6cbca08dbc7cca78d5e9b997204566c00d689f6c4846474731a3a071f8d not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.338172 4949 scope.go:117] "RemoveContainer" containerID="17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.338311 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6"} err="failed to get container status \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": rpc error: code = NotFound desc = could not find container \"17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6\": container with ID starting with 17b8a9e3724a6a5d2d383f7d68f1235361d17cb7ddb025cedda11d57636130f6 not found: ID does not exist" Jan 20 15:00:12 crc kubenswrapper[4949]: I0120 15:00:12.796815 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775d7cfb-d5e3-457d-a7fa-4f0bdb752d04" path="/var/lib/kubelet/pods/775d7cfb-d5e3-457d-a7fa-4f0bdb752d04/volumes" Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.169225 4949 generic.go:334] "Generic (PLEG): container finished" podID="64a3b80e-47e3-4bd6-8f47-7160cb0ce59a" containerID="d654df5c84f5b2ba92addf90bcf5db7b22a94ee32472c1967777228a107239a5" exitCode=0 Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.169338 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerDied","Data":"d654df5c84f5b2ba92addf90bcf5db7b22a94ee32472c1967777228a107239a5"} Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.172642 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"de7c111f054ef0c79ba48d5ac80613b123268feccb3d5f627a608cb4b8450d96"} Jan 20 15:00:13 crc kubenswrapper[4949]: I0120 15:00:13.173878 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185239 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"75545bf5c5d489625d6146bc8bc5966b32a56e4ebee01f914356c6c5f29fb55f"} Jan 20 
15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185592 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"d5a17be3922c3109b28464d39203204a98bae74e4b68c2432360d34ea83a712b"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185609 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"55612c2f7ee955d25515e5f2b52fd81d49cf35c205a2291122cff2fb3776dccc"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185623 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"8f57cff25330b0b8ce4cdecb03ff1186e2126efdc56965070a3ad99ac8edd72c"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185634 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"7cd650029e29248c26efe1acd4969723fab1106a7a59cc2d2e365c21eca6fecb"} Jan 20 15:00:14 crc kubenswrapper[4949]: I0120 15:00:14.185644 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"c7e96f449c72106e35be484c9382fcba51defce2274710087da32d7339ae7e1e"} Jan 20 15:00:16 crc kubenswrapper[4949]: I0120 15:00:16.198569 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"95ba337532e1704dbee06ce9dd09e1bcea1a7c62fa214acd28762ad0901cd526"} Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.218629 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" event={"ID":"64a3b80e-47e3-4bd6-8f47-7160cb0ce59a","Type":"ContainerStarted","Data":"087f068568e694b85285a6c161905a7896010bd0870614500689cf988e5fda07"} Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.218988 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.219006 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.245322 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:19 crc kubenswrapper[4949]: I0120 15:00:19.255939 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" podStartSLOduration=8.255921811 podStartE2EDuration="8.255921811s" podCreationTimestamp="2026-01-20 15:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:00:19.247489147 +0000 UTC m=+615.057320005" watchObservedRunningTime="2026-01-20 15:00:19.255921811 +0000 UTC m=+615.065752669" Jan 20 15:00:20 crc kubenswrapper[4949]: I0120 15:00:20.231365 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:20 
crc kubenswrapper[4949]: I0120 15:00:20.268038 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:22 crc kubenswrapper[4949]: I0120 15:00:22.788739 4949 scope.go:117] "RemoveContainer" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" Jan 20 15:00:22 crc kubenswrapper[4949]: E0120 15:00:22.789856 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2szcd_openshift-multus(3ac16078-f295-4f4b-875c-a8505e87b9da)\"" pod="openshift-multus/multus-2szcd" podUID="3ac16078-f295-4f4b-875c-a8505e87b9da" Jan 20 15:00:37 crc kubenswrapper[4949]: I0120 15:00:37.789666 4949 scope.go:117] "RemoveContainer" containerID="8a7b4e0505c42d2e716d5c8feb5239c3103927623d8259c89225529765049470" Jan 20 15:00:38 crc kubenswrapper[4949]: I0120 15:00:38.338362 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:00:38 crc kubenswrapper[4949]: I0120 15:00:38.338726 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2szcd" event={"ID":"3ac16078-f295-4f4b-875c-a8505e87b9da","Type":"ContainerStarted","Data":"ffabd8ff2e0b25be4ba66141518acd8b6b9068f3e3a92e9fd03df65a83adc54c"} Jan 20 15:00:42 crc kubenswrapper[4949]: I0120 15:00:42.282318 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mxmf" Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.089215 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"] Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.092836 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.094858 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"]
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.095370 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.170597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.171278 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.171325 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.272904 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.272993 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.273065 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.273470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.273611 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.291622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.437630 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:00:58 crc kubenswrapper[4949]: I0120 15:00:58.635718 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"]
Jan 20 15:00:59 crc kubenswrapper[4949]: I0120 15:00:59.461960 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerID="5b51387318628027a96aa5844ef249d0d94c69dea3e6fbcd48dfb3d440c9ec7c" exitCode=0
Jan 20 15:00:59 crc kubenswrapper[4949]: I0120 15:00:59.462040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"5b51387318628027a96aa5844ef249d0d94c69dea3e6fbcd48dfb3d440c9ec7c"}
Jan 20 15:00:59 crc kubenswrapper[4949]: I0120 15:00:59.462102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerStarted","Data":"dbb172ee0a0e40161087e744b459a19cec475c33b20de78c28ef79c5599e95c9"}
Jan 20 15:01:01 crc kubenswrapper[4949]: I0120 15:01:01.475143 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerID="cf9bb695a1350c9ceec799d4f88bacf7b8002989afc6a8e95bff9847a6fc9823" exitCode=0
Jan 20 15:01:01 crc kubenswrapper[4949]: I0120 15:01:01.475204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"cf9bb695a1350c9ceec799d4f88bacf7b8002989afc6a8e95bff9847a6fc9823"}
Jan 20 15:01:02 crc kubenswrapper[4949]: I0120 15:01:02.487237 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerID="5fb640316408983836ba4faaf12805ccea2809df987579f1dbb61aca30eb0631" exitCode=0
Jan 20 15:01:02 crc kubenswrapper[4949]: I0120 15:01:02.487281 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"5fb640316408983836ba4faaf12805ccea2809df987579f1dbb61aca30eb0631"}
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.756597 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839049 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") pod \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") "
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839155 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") pod \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") "
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839245 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") pod \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\" (UID: \"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06\") "
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.839757 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle" (OuterVolumeSpecName: "bundle") pod "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" (UID: "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.844349 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm" (OuterVolumeSpecName: "kube-api-access-rpvmm") pod "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" (UID: "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06"). InnerVolumeSpecName "kube-api-access-rpvmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.853415 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util" (OuterVolumeSpecName: "util") pod "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" (UID: "3f63e0ce-f0ce-434d-b9f5-b0695dba0b06"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.940469 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpvmm\" (UniqueName: \"kubernetes.io/projected/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-kube-api-access-rpvmm\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.940545 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-util\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:03 crc kubenswrapper[4949]: I0120 15:01:03.940566 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f63e0ce-f0ce-434d-b9f5-b0695dba0b06-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:01:04 crc kubenswrapper[4949]: I0120 15:01:04.501718 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk" event={"ID":"3f63e0ce-f0ce-434d-b9f5-b0695dba0b06","Type":"ContainerDied","Data":"dbb172ee0a0e40161087e744b459a19cec475c33b20de78c28ef79c5599e95c9"}
Jan 20 15:01:04 crc kubenswrapper[4949]: I0120 15:01:04.501767 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk"
Jan 20 15:01:04 crc kubenswrapper[4949]: I0120 15:01:04.501780 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb172ee0a0e40161087e744b459a19cec475c33b20de78c28ef79c5599e95c9"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.776739 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsrwb"]
Jan 20 15:01:06 crc kubenswrapper[4949]: E0120 15:01:06.777250 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="extract"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777264 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="extract"
Jan 20 15:01:06 crc kubenswrapper[4949]: E0120 15:01:06.777280 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="util"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777286 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="util"
Jan 20 15:01:06 crc kubenswrapper[4949]: E0120 15:01:06.777295 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="pull"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777302 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="pull"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777389 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f63e0ce-f0ce-434d-b9f5-b0695dba0b06" containerName="extract"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.777765 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.779837 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.779971 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4gdxk"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.780016 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.798229 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsrwb"]
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.879635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fnfp\" (UniqueName: \"kubernetes.io/projected/b2bfb1bf-1717-4d51-9632-204856f869f4-kube-api-access-7fnfp\") pod \"nmstate-operator-646758c888-jsrwb\" (UID: \"b2bfb1bf-1717-4d51-9632-204856f869f4\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb"
Jan 20 15:01:06 crc kubenswrapper[4949]: I0120 15:01:06.981722 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fnfp\" (UniqueName: \"kubernetes.io/projected/b2bfb1bf-1717-4d51-9632-204856f869f4-kube-api-access-7fnfp\") pod \"nmstate-operator-646758c888-jsrwb\" (UID: \"b2bfb1bf-1717-4d51-9632-204856f869f4\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb"
Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.009642 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fnfp\" (UniqueName: \"kubernetes.io/projected/b2bfb1bf-1717-4d51-9632-204856f869f4-kube-api-access-7fnfp\") pod \"nmstate-operator-646758c888-jsrwb\" (UID: \"b2bfb1bf-1717-4d51-9632-204856f869f4\") " pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb"
Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.096283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb"
Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.288069 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jsrwb"]
Jan 20 15:01:07 crc kubenswrapper[4949]: W0120 15:01:07.297685 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2bfb1bf_1717_4d51_9632_204856f869f4.slice/crio-1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5 WatchSource:0}: Error finding container 1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5: Status 404 returned error can't find the container with id 1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5
Jan 20 15:01:07 crc kubenswrapper[4949]: I0120 15:01:07.516686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" event={"ID":"b2bfb1bf-1717-4d51-9632-204856f869f4","Type":"ContainerStarted","Data":"1aa1c851904fa1ebb402eba7ebf014592070049ec76e8f6a1d333a342a9805d5"}
Jan 20 15:01:10 crc kubenswrapper[4949]: I0120 15:01:10.540090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" event={"ID":"b2bfb1bf-1717-4d51-9632-204856f869f4","Type":"ContainerStarted","Data":"bc4f499c09fee86ecd739660816ce7aab9d3965845fc5a784be91cb8045556ec"}
Jan 20 15:01:10 crc kubenswrapper[4949]: I0120 15:01:10.559624 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-jsrwb" podStartSLOduration=1.954863482 podStartE2EDuration="4.559607298s" podCreationTimestamp="2026-01-20 15:01:06 +0000 UTC" firstStartedPulling="2026-01-20 15:01:07.300292023 +0000 UTC m=+663.110122881" lastFinishedPulling="2026-01-20 15:01:09.905035839 +0000 UTC m=+665.714866697" observedRunningTime="2026-01-20 15:01:10.55682702 +0000 UTC m=+666.366657878" watchObservedRunningTime="2026-01-20 15:01:10.559607298 +0000 UTC m=+666.369438156"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.527278 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bz62x"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.528126 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.529769 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jvqm8"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.542898 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.543954 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.548883 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.554084 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bz62x"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.567584 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ndwpd"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.568429 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.573033 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652250 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-nmstate-lock\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652307 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8gb\" (UniqueName: \"kubernetes.io/projected/248f6a09-0064-4d9f-a4d7-13a92b06ee72-kube-api-access-fr8gb\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652337 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-ovs-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652375 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75q4\" (UniqueName: \"kubernetes.io/projected/71837cd3-c24a-4d86-b59f-28330f7d2809-kube-api-access-h75q4\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652417 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652443 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m76vj\" (UniqueName: \"kubernetes.io/projected/696be671-724b-4447-ba02-730dd10fc489-kube-api-access-m76vj\") pod \"nmstate-metrics-54757c584b-bz62x\" (UID: \"696be671-724b-4447-ba02-730dd10fc489\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.652464 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-dbus-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.660191 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.660838 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.664903 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.664923 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.665238 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bsvwp"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.680031 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753234 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8gb\" (UniqueName: \"kubernetes.io/projected/248f6a09-0064-4d9f-a4d7-13a92b06ee72-kube-api-access-fr8gb\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-ovs-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753300 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a366383-883e-4f7e-b656-d23eb0fe6294-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753331 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h75q4\" (UniqueName: \"kubernetes.io/projected/71837cd3-c24a-4d86-b59f-28330f7d2809-kube-api-access-h75q4\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bt7v\" (UniqueName: \"kubernetes.io/projected/7a366383-883e-4f7e-b656-d23eb0fe6294-kube-api-access-5bt7v\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753399 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m76vj\" (UniqueName: \"kubernetes.io/projected/696be671-724b-4447-ba02-730dd10fc489-kube-api-access-m76vj\") pod \"nmstate-metrics-54757c584b-bz62x\" (UID: \"696be671-724b-4447-ba02-730dd10fc489\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753419 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-dbus-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753459 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a366383-883e-4f7e-b656-d23eb0fe6294-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753479 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-nmstate-lock\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753562 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-nmstate-lock\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: E0120 15:01:11.753604 4949 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 20 15:01:11 crc kubenswrapper[4949]: E0120 15:01:11.753671 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair podName:71837cd3-c24a-4d86-b59f-28330f7d2809 nodeName:}" failed. No retries permitted until 2026-01-20 15:01:12.253649786 +0000 UTC m=+668.063480654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-twsz5" (UID: "71837cd3-c24a-4d86-b59f-28330f7d2809") : secret "openshift-nmstate-webhook" not found
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.753913 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-ovs-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.754135 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/248f6a09-0064-4d9f-a4d7-13a92b06ee72-dbus-socket\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.777297 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m76vj\" (UniqueName: \"kubernetes.io/projected/696be671-724b-4447-ba02-730dd10fc489-kube-api-access-m76vj\") pod \"nmstate-metrics-54757c584b-bz62x\" (UID: \"696be671-724b-4447-ba02-730dd10fc489\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.785247 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h75q4\" (UniqueName: \"kubernetes.io/projected/71837cd3-c24a-4d86-b59f-28330f7d2809-kube-api-access-h75q4\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.789238 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8gb\" (UniqueName: \"kubernetes.io/projected/248f6a09-0064-4d9f-a4d7-13a92b06ee72-kube-api-access-fr8gb\") pod \"nmstate-handler-ndwpd\" (UID: \"248f6a09-0064-4d9f-a4d7-13a92b06ee72\") " pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.846329 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.846406 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bbcc9b596-78qpx"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.847280 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.854719 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bt7v\" (UniqueName: \"kubernetes.io/projected/7a366383-883e-4f7e-b656-d23eb0fe6294-kube-api-access-5bt7v\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.855167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a366383-883e-4f7e-b656-d23eb0fe6294-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.855214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a366383-883e-4f7e-b656-d23eb0fe6294-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.856192 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a366383-883e-4f7e-b656-d23eb0fe6294-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.873436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a366383-883e-4f7e-b656-d23eb0fe6294-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.880765 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bt7v\" (UniqueName: \"kubernetes.io/projected/7a366383-883e-4f7e-b656-d23eb0fe6294-kube-api-access-5bt7v\") pod \"nmstate-console-plugin-7754f76f8b-vt2ng\" (UID: \"7a366383-883e-4f7e-b656-d23eb0fe6294\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.888254 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ndwpd"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.916649 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcc9b596-78qpx"]
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956338 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-trusted-ca-bundle\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872g9\" (UniqueName: \"kubernetes.io/projected/21f26abc-c431-4136-94c2-af8e66f624a3-kube-api-access-872g9\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956423 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-oauth-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956550 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-service-ca\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956574 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-console-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956590 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-oauth-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.956616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:11 crc kubenswrapper[4949]: I0120 15:01:11.975025 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.047966 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-bz62x"]
Jan 20 15:01:12 crc kubenswrapper[4949]: W0120 15:01:12.057710 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696be671_724b_4447_ba02_730dd10fc489.slice/crio-1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12 WatchSource:0}: Error finding container 1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12: Status 404 returned error can't find the container with id 1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.057895 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872g9\" (UniqueName: \"kubernetes.io/projected/21f26abc-c431-4136-94c2-af8e66f624a3-kube-api-access-872g9\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.058000 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-oauth-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.058076 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-service-ca\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.059260 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-service-ca\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.059845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-console-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.059864 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-oauth-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.059880 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-oauth-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.060016 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.060081 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-trusted-ca-bundle\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.060948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-trusted-ca-bundle\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.061552 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21f26abc-c431-4136-94c2-af8e66f624a3-console-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.064160 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-serving-cert\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.065344 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21f26abc-c431-4136-94c2-af8e66f624a3-console-oauth-config\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.074892 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872g9\" (UniqueName: \"kubernetes.io/projected/21f26abc-c431-4136-94c2-af8e66f624a3-kube-api-access-872g9\") pod \"console-bbcc9b596-78qpx\" (UID: \"21f26abc-c431-4136-94c2-af8e66f624a3\") " pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: W0120 15:01:12.151184 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a366383_883e_4f7e_b656_d23eb0fe6294.slice/crio-bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940 WatchSource:0}: Error finding container bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940: Status 404 returned error can't find the container with id bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.151943 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng"]
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.216785 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bbcc9b596-78qpx"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.262153 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.268612 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/71837cd3-c24a-4d86-b59f-28330f7d2809-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-twsz5\" (UID: \"71837cd3-c24a-4d86-b59f-28330f7d2809\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.434702 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bbcc9b596-78qpx"]
Jan 20 15:01:12 crc kubenswrapper[4949]: W0120 15:01:12.437491 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f26abc_c431_4136_94c2_af8e66f624a3.slice/crio-98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f WatchSource:0}: Error finding container 98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f: Status 404 returned error can't find the container with id 98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.471841 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.556208 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcc9b596-78qpx" event={"ID":"21f26abc-c431-4136-94c2-af8e66f624a3","Type":"ContainerStarted","Data":"98746eff6ddc2f57c6cd4c945a6151e2972edd97d9fec90eabe49f6de2878e1f"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.557151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ndwpd" event={"ID":"248f6a09-0064-4d9f-a4d7-13a92b06ee72","Type":"ContainerStarted","Data":"0d886b19e97a080a5a84ba7d91d52cf62c368da8a2e4033a66a6800cba5d3d64"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.558024 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" event={"ID":"7a366383-883e-4f7e-b656-d23eb0fe6294","Type":"ContainerStarted","Data":"bcf1f6dfd2dce655044a4cf4df53abc87fbc8f4f5383579576c232b102ef8940"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.558867 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" event={"ID":"696be671-724b-4447-ba02-730dd10fc489","Type":"ContainerStarted","Data":"1ccdb7fb4f1222fec0e1cb0e324e4133e519a01fa97406622612ac0480d1db12"}
Jan 20 15:01:12 crc kubenswrapper[4949]: I0120 15:01:12.649164 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5"]
Jan 20 15:01:13 crc kubenswrapper[4949]: I0120 15:01:13.568829 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bbcc9b596-78qpx" event={"ID":"21f26abc-c431-4136-94c2-af8e66f624a3","Type":"ContainerStarted","Data":"4511b802b4c91c97ca61c7e9f49a532767b00c330949bb672dce5f0421c45aa6"}
Jan 20 15:01:13 crc kubenswrapper[4949]: I0120 15:01:13.570355 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" event={"ID":"71837cd3-c24a-4d86-b59f-28330f7d2809","Type":"ContainerStarted","Data":"7c9c29bf669f95ba397dd954775c84e5b489f9a129c32fd6a62d3af40eaae06b"} Jan 20 15:01:13 crc kubenswrapper[4949]: I0120 15:01:13.591448 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bbcc9b596-78qpx" podStartSLOduration=2.5914033869999997 podStartE2EDuration="2.591403387s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:01:13.589355032 +0000 UTC m=+669.399185900" watchObservedRunningTime="2026-01-20 15:01:13.591403387 +0000 UTC m=+669.401234245" Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.582204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" event={"ID":"7a366383-883e-4f7e-b656-d23eb0fe6294","Type":"ContainerStarted","Data":"b45ee6437aaa11af35ef9a8e6859327c64e1e615bd02af5f249bd3676dbb9a9c"} Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.585035 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" event={"ID":"696be671-724b-4447-ba02-730dd10fc489","Type":"ContainerStarted","Data":"59e9842866e46aee0c354e986556ee76d2c048e1085726f48092e4e184d33f64"} Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.586054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" event={"ID":"71837cd3-c24a-4d86-b59f-28330f7d2809","Type":"ContainerStarted","Data":"146316af6a9ed3c54fca36d5fd02f5dde1a1f38bde36279909e6566bf3f0fbe4"} Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.586411 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.587964 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ndwpd" event={"ID":"248f6a09-0064-4d9f-a4d7-13a92b06ee72","Type":"ContainerStarted","Data":"c8cb9df19580dc5e2aa20327e006450d2f1bc93c11a4be8004a8d50994197f92"} Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.588373 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.602007 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-vt2ng" podStartSLOduration=2.206048656 podStartE2EDuration="4.601977719s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="2026-01-20 15:01:12.154154177 +0000 UTC m=+667.963985035" lastFinishedPulling="2026-01-20 15:01:14.55008324 +0000 UTC m=+670.359914098" observedRunningTime="2026-01-20 15:01:15.599142769 +0000 UTC m=+671.408973627" watchObservedRunningTime="2026-01-20 15:01:15.601977719 +0000 UTC m=+671.411808627" Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.615937 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" podStartSLOduration=2.712243099 podStartE2EDuration="4.615920051s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" 
firstStartedPulling="2026-01-20 15:01:12.651864511 +0000 UTC m=+668.461695369" lastFinishedPulling="2026-01-20 15:01:14.555541423 +0000 UTC m=+670.365372321" observedRunningTime="2026-01-20 15:01:15.614022171 +0000 UTC m=+671.423853029" watchObservedRunningTime="2026-01-20 15:01:15.615920051 +0000 UTC m=+671.425750919" Jan 20 15:01:15 crc kubenswrapper[4949]: I0120 15:01:15.630510 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ndwpd" podStartSLOduration=1.987427292 podStartE2EDuration="4.630493794s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="2026-01-20 15:01:11.914792335 +0000 UTC m=+667.724623193" lastFinishedPulling="2026-01-20 15:01:14.557858837 +0000 UTC m=+670.367689695" observedRunningTime="2026-01-20 15:01:15.628297724 +0000 UTC m=+671.438128762" watchObservedRunningTime="2026-01-20 15:01:15.630493794 +0000 UTC m=+671.440324652" Jan 20 15:01:17 crc kubenswrapper[4949]: I0120 15:01:17.601394 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" event={"ID":"696be671-724b-4447-ba02-730dd10fc489","Type":"ContainerStarted","Data":"43732728b63cf52bad6242f99b19d88f3170665a888d7cd6f11991e3f738aa68"} Jan 20 15:01:21 crc kubenswrapper[4949]: I0120 15:01:21.921870 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ndwpd" Jan 20 15:01:21 crc kubenswrapper[4949]: I0120 15:01:21.944680 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-bz62x" podStartSLOduration=6.090691673 podStartE2EDuration="10.944664239s" podCreationTimestamp="2026-01-20 15:01:11 +0000 UTC" firstStartedPulling="2026-01-20 15:01:12.060329841 +0000 UTC m=+667.870160699" lastFinishedPulling="2026-01-20 15:01:16.914302407 +0000 UTC m=+672.724133265" observedRunningTime="2026-01-20 15:01:17.626623728 +0000 UTC m=+673.436454586" watchObservedRunningTime="2026-01-20 15:01:21.944664239 +0000 UTC m=+677.754495097" Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.218178 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.218892 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.226132 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.640407 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bbcc9b596-78qpx" Jan 20 15:01:22 crc kubenswrapper[4949]: I0120 15:01:22.713083 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 15:01:32 crc kubenswrapper[4949]: I0120 15:01:32.481559 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-twsz5" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.112199 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"] Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.113997 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.116169 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.124090 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"] Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.255094 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.255136 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.255424 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.356792 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.356889 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.356912 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.357412 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.357745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.382422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.429440 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.682716 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr"] Jan 20 15:01:46 crc kubenswrapper[4949]: I0120 15:01:46.779995 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerStarted","Data":"8ef34c083c82a0a0741f808e34b832498cc30623690d5a9580192e2ff2002181"} Jan 20 15:01:47 crc kubenswrapper[4949]: I0120 15:01:47.775384 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-w9d9r" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console" containerID="cri-o://203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721" gracePeriod=15 Jan 20 15:01:47 crc kubenswrapper[4949]: I0120 15:01:47.788189 4949 generic.go:334] "Generic (PLEG): container finished" podID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerID="4957e0b30f586ade878532ac3515259b4d4e851e6da96f31c3fec1bd774823ed" exitCode=0 Jan 20 15:01:47 crc kubenswrapper[4949]: I0120 15:01:47.788245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"4957e0b30f586ade878532ac3515259b4d4e851e6da96f31c3fec1bd774823ed"} Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.222028 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w9d9r_37539dae-2103-4b6c-871c-48b0c35a1850/console/0.log" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.222274 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386006 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386118 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386157 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386213 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.386317 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.387475 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.387542 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") pod \"37539dae-2103-4b6c-871c-48b0c35a1850\" (UID: \"37539dae-2103-4b6c-871c-48b0c35a1850\") " Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.387796 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config" (OuterVolumeSpecName: "console-config") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca" (OuterVolumeSpecName: "service-ca") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388507 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388838 4949 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388866 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.388492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.390980 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.391008 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.391394 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp" (OuterVolumeSpecName: "kube-api-access-kcfsp") pod "37539dae-2103-4b6c-871c-48b0c35a1850" (UID: "37539dae-2103-4b6c-871c-48b0c35a1850"). InnerVolumeSpecName "kube-api-access-kcfsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490309 4949 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490557 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539dae-2103-4b6c-871c-48b0c35a1850-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490667 4949 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490751 4949 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37539dae-2103-4b6c-871c-48b0c35a1850-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.490824 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcfsp\" (UniqueName: \"kubernetes.io/projected/37539dae-2103-4b6c-871c-48b0c35a1850-kube-api-access-kcfsp\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.797117 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w9d9r_37539dae-2103-4b6c-871c-48b0c35a1850/console/0.log" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.797170 4949 generic.go:334] "Generic (PLEG): container finished" podID="37539dae-2103-4b6c-871c-48b0c35a1850" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721" exitCode=2 Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.797236 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-w9d9r" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.805946 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerDied","Data":"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"} Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.805981 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w9d9r" event={"ID":"37539dae-2103-4b6c-871c-48b0c35a1850","Type":"ContainerDied","Data":"f4877eaf97bd7c4d0e52e4fddc8cae7a451b37b3fd251230d5ececd8dac1c70e"} Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.806000 4949 scope.go:117] "RemoveContainer" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.830714 4949 scope.go:117] "RemoveContainer" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721" Jan 20 15:01:48 crc kubenswrapper[4949]: E0120 15:01:48.831573 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721\": container with ID starting with 203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721 not found: ID does not exist" containerID="203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.831663 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721"} err="failed to get container status \"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721\": rpc error: code = NotFound desc = could not find container \"203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721\": container with ID starting with 203312a5bb9b927647fa964ee03df05ed1b9e9445527b7f0ff77efc934119721 not found: ID does not exist" Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.844343 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 15:01:48 crc kubenswrapper[4949]: I0120 15:01:48.848672 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-w9d9r"] Jan 20 15:01:49 crc kubenswrapper[4949]: I0120 15:01:49.811867 4949 generic.go:334] "Generic (PLEG): container finished" podID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerID="4c4cabaf73dfcbd9c156986045d5cf50fe6169932361f99b8511714d0215e960" exitCode=0 Jan 20 15:01:49 crc kubenswrapper[4949]: I0120 15:01:49.811958 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"4c4cabaf73dfcbd9c156986045d5cf50fe6169932361f99b8511714d0215e960"} Jan 20 15:01:50 crc kubenswrapper[4949]: I0120 15:01:50.798556 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" path="/var/lib/kubelet/pods/37539dae-2103-4b6c-871c-48b0c35a1850/volumes" Jan 20 15:01:50 crc kubenswrapper[4949]: I0120 15:01:50.824044 4949 generic.go:334] "Generic (PLEG): container finished" podID="21202f95-d312-47b4-988f-4cd0a9dac08e" 
containerID="21db6d2d22952f7c3040634cd81293adc0ed863a2ba69becd94d1c5a829477cb" exitCode=0 Jan 20 15:01:50 crc kubenswrapper[4949]: I0120 15:01:50.824093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"21db6d2d22952f7c3040634cd81293adc0ed863a2ba69becd94d1c5a829477cb"} Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.093415 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.251508 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") pod \"21202f95-d312-47b4-988f-4cd0a9dac08e\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.251582 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") pod \"21202f95-d312-47b4-988f-4cd0a9dac08e\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.251728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") pod \"21202f95-d312-47b4-988f-4cd0a9dac08e\" (UID: \"21202f95-d312-47b4-988f-4cd0a9dac08e\") " Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.253179 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle" (OuterVolumeSpecName: "bundle") pod "21202f95-d312-47b4-988f-4cd0a9dac08e" (UID: "21202f95-d312-47b4-988f-4cd0a9dac08e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.264455 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv" (OuterVolumeSpecName: "kube-api-access-gtcvv") pod "21202f95-d312-47b4-988f-4cd0a9dac08e" (UID: "21202f95-d312-47b4-988f-4cd0a9dac08e"). InnerVolumeSpecName "kube-api-access-gtcvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.272559 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util" (OuterVolumeSpecName: "util") pod "21202f95-d312-47b4-988f-4cd0a9dac08e" (UID: "21202f95-d312-47b4-988f-4cd0a9dac08e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.353370 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.353407 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtcvv\" (UniqueName: \"kubernetes.io/projected/21202f95-d312-47b4-988f-4cd0a9dac08e-kube-api-access-gtcvv\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.353420 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21202f95-d312-47b4-988f-4cd0a9dac08e-util\") on node \"crc\" DevicePath \"\"" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.844328 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" event={"ID":"21202f95-d312-47b4-988f-4cd0a9dac08e","Type":"ContainerDied","Data":"8ef34c083c82a0a0741f808e34b832498cc30623690d5a9580192e2ff2002181"} Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.844377 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef34c083c82a0a0741f808e34b832498cc30623690d5a9580192e2ff2002181" Jan 20 15:01:52 crc kubenswrapper[4949]: I0120 15:01:52.844383 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr" Jan 20 15:01:57 crc kubenswrapper[4949]: I0120 15:01:57.152265 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:01:57 crc kubenswrapper[4949]: I0120 15:01:57.152882 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.491987 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"] Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492584 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="extract" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492599 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="extract" Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492618 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="pull" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492627 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="pull" Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492640 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="util" Jan 20 15:02:01 
crc kubenswrapper[4949]: I0120 15:02:01.492649 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="util" Jan 20 15:02:01 crc kubenswrapper[4949]: E0120 15:02:01.492662 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492669 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492799 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="37539dae-2103-4b6c-871c-48b0c35a1850" containerName="console" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.492813 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="21202f95-d312-47b4-988f-4cd0a9dac08e" containerName="extract" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.493272 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.495088 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.495832 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.495840 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.497334 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.499367 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2ntrl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.505371 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"] Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.571381 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-webhook-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.571431 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/aab28d03-013d-4f55-8f5d-4452aa51ae0b-kube-api-access-9mvd6\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.571466 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-apiservice-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: 
\"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.672890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-webhook-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.672938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/aab28d03-013d-4f55-8f5d-4452aa51ae0b-kube-api-access-9mvd6\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.672971 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-apiservice-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.678407 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-apiservice-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.678886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aab28d03-013d-4f55-8f5d-4452aa51ae0b-webhook-cert\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.696646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvd6\" (UniqueName: \"kubernetes.io/projected/aab28d03-013d-4f55-8f5d-4452aa51ae0b-kube-api-access-9mvd6\") pod \"metallb-operator-controller-manager-7949cdb884-qwqpl\" (UID: \"aab28d03-013d-4f55-8f5d-4452aa51ae0b\") " pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.748888 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm"] Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.749710 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.755153 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.755192 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g658z" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.755249 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.763810 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm"] Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.810795 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.875777 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-apiservice-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.876297 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-webhook-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.876434 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/418359eb-1dea-4f02-9964-9ab810e3bc09-kube-api-access-2kkrk\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.977346 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-apiservice-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.977399 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-webhook-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.977426 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/418359eb-1dea-4f02-9964-9ab810e3bc09-kube-api-access-2kkrk\") pod 
\"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.992441 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-webhook-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.992885 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/418359eb-1dea-4f02-9964-9ab810e3bc09-apiservice-cert\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:01 crc kubenswrapper[4949]: I0120 15:02:01.996155 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/418359eb-1dea-4f02-9964-9ab810e3bc09-kube-api-access-2kkrk\") pod \"metallb-operator-webhook-server-598fc6787c-lklkm\" (UID: \"418359eb-1dea-4f02-9964-9ab810e3bc09\") " pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.065916 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.076593 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl"] Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.483496 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm"] Jan 20 15:02:02 crc kubenswrapper[4949]: W0120 15:02:02.491683 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418359eb_1dea_4f02_9964_9ab810e3bc09.slice/crio-b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce WatchSource:0}: Error finding container b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce: Status 404 returned error can't find the container with id b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.913439 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" event={"ID":"aab28d03-013d-4f55-8f5d-4452aa51ae0b","Type":"ContainerStarted","Data":"2da35e27e23c661d374a95e2f369f88ff8ad4b5b61794dc6ef136f820018cb7b"} Jan 20 15:02:02 crc kubenswrapper[4949]: I0120 15:02:02.914511 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" event={"ID":"418359eb-1dea-4f02-9964-9ab810e3bc09","Type":"ContainerStarted","Data":"b678d55ba06d84513bd8ee56bd71f5ea53df1d15e8ac9b3ae06b57ac9be806ce"} Jan 20 15:02:05 crc kubenswrapper[4949]: I0120 15:02:05.931807 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" 
event={"ID":"aab28d03-013d-4f55-8f5d-4452aa51ae0b","Type":"ContainerStarted","Data":"e779d556b8a5d0b8149abc656c5078b13a7ae8b898c2e76d33f7473f0dc59c56"} Jan 20 15:02:05 crc kubenswrapper[4949]: I0120 15:02:05.932777 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" Jan 20 15:02:05 crc kubenswrapper[4949]: I0120 15:02:05.955347 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" podStartSLOduration=1.655204922 podStartE2EDuration="4.95532863s" podCreationTimestamp="2026-01-20 15:02:01 +0000 UTC" firstStartedPulling="2026-01-20 15:02:02.097007239 +0000 UTC m=+717.906838097" lastFinishedPulling="2026-01-20 15:02:05.397130947 +0000 UTC m=+721.206961805" observedRunningTime="2026-01-20 15:02:05.953696459 +0000 UTC m=+721.763527337" watchObservedRunningTime="2026-01-20 15:02:05.95532863 +0000 UTC m=+721.765159508" Jan 20 15:02:09 crc kubenswrapper[4949]: I0120 15:02:09.958363 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" event={"ID":"418359eb-1dea-4f02-9964-9ab810e3bc09","Type":"ContainerStarted","Data":"9dd1923c007087a728220d7a810a22bddb0350ec484e36c06558c22b3ddb24ff"} Jan 20 15:02:09 crc kubenswrapper[4949]: I0120 15:02:09.959174 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:09 crc kubenswrapper[4949]: I0120 15:02:09.983116 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" podStartSLOduration=1.962679622 podStartE2EDuration="8.983099814s" podCreationTimestamp="2026-01-20 15:02:01 +0000 UTC" firstStartedPulling="2026-01-20 15:02:02.494564307 +0000 UTC m=+718.304395165" lastFinishedPulling="2026-01-20 15:02:09.514984499 +0000 UTC m=+725.324815357" observedRunningTime="2026-01-20 15:02:09.978605932 +0000 UTC m=+725.788436790" watchObservedRunningTime="2026-01-20 15:02:09.983099814 +0000 UTC m=+725.792930672" Jan 20 15:02:22 crc kubenswrapper[4949]: I0120 15:02:22.071236 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-598fc6787c-lklkm" Jan 20 15:02:27 crc kubenswrapper[4949]: I0120 15:02:27.152212 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:02:27 crc kubenswrapper[4949]: I0120 15:02:27.152879 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:02:37 crc kubenswrapper[4949]: I0120 15:02:37.956875 4949 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 15:02:41 crc kubenswrapper[4949]: I0120 15:02:41.814117 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7949cdb884-qwqpl" 
Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.641532 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.642400 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.644914 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.648993 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gdn5t" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.659173 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hg78r"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.661565 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.663760 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.664015 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.666437 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698270 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-sockets\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698344 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9787b339-5a35-4568-8ea4-12b8904efd8a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698386 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-conf\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics-certs\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-startup\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698490 
4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698549 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2fq\" (UniqueName: \"kubernetes.io/projected/a4a159b5-92c1-4221-9b9d-ef46eda1afca-kube-api-access-2h2fq\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698584 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58c6\" (UniqueName: \"kubernetes.io/projected/9787b339-5a35-4568-8ea4-12b8904efd8a-kube-api-access-f58c6\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.698615 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-reloader\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.731606 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-znbk6"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.732826 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.735200 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.735604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dkdfl" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.735989 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.738943 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.753036 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-n6txw"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.753880 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.757057 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.773986 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n6txw"] Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-cert\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799849 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-sockets\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9787b339-5a35-4568-8ea4-12b8904efd8a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799916 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-conf\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799943 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.799983 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics-certs\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8rl\" (UniqueName: \"kubernetes.io/projected/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-kube-api-access-rr8rl\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800044 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-startup\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800069 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-metrics-certs\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800089 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800105 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2fq\" (UniqueName: \"kubernetes.io/projected/a4a159b5-92c1-4221-9b9d-ef46eda1afca-kube-api-access-2h2fq\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800154 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e00f603c-93d1-4941-908a-26fdf24da7b7-metallb-excludel2\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800190 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f58c6\" (UniqueName: \"kubernetes.io/projected/9787b339-5a35-4568-8ea4-12b8904efd8a-kube-api-access-f58c6\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jscpg\" (UniqueName: \"kubernetes.io/projected/e00f603c-93d1-4941-908a-26fdf24da7b7-kube-api-access-jscpg\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800229 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-reloader\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800348 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-sockets\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-conf\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.800845 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-reloader\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.801102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.801902 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4a159b5-92c1-4221-9b9d-ef46eda1afca-frr-startup\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.806793 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9787b339-5a35-4568-8ea4-12b8904efd8a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.816830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4a159b5-92c1-4221-9b9d-ef46eda1afca-metrics-certs\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.817318 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2fq\" (UniqueName: \"kubernetes.io/projected/a4a159b5-92c1-4221-9b9d-ef46eda1afca-kube-api-access-2h2fq\") pod \"frr-k8s-hg78r\" (UID: \"a4a159b5-92c1-4221-9b9d-ef46eda1afca\") " pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.843589 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f58c6\" (UniqueName: \"kubernetes.io/projected/9787b339-5a35-4568-8ea4-12b8904efd8a-kube-api-access-f58c6\") pod \"frr-k8s-webhook-server-7df86c4f6c-87tfc\" (UID: \"9787b339-5a35-4568-8ea4-12b8904efd8a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.900867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr8rl\" (UniqueName: \"kubernetes.io/projected/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-kube-api-access-rr8rl\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.900926 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-metrics-certs\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 
crc kubenswrapper[4949]: I0120 15:02:42.900957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.900986 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e00f603c-93d1-4941-908a-26fdf24da7b7-metallb-excludel2\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.901002 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jscpg\" (UniqueName: \"kubernetes.io/projected/e00f603c-93d1-4941-908a-26fdf24da7b7-kube-api-access-jscpg\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.901025 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-cert\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.901060 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901143 4949 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901190 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist podName:e00f603c-93d1-4941-908a-26fdf24da7b7 nodeName:}" failed. No retries permitted until 2026-01-20 15:02:43.401175421 +0000 UTC m=+759.211006279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist") pod "speaker-znbk6" (UID: "e00f603c-93d1-4941-908a-26fdf24da7b7") : secret "metallb-memberlist" not found Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901406 4949 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 20 15:02:42 crc kubenswrapper[4949]: E0120 15:02:42.901451 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs podName:e00f603c-93d1-4941-908a-26fdf24da7b7 nodeName:}" failed. No retries permitted until 2026-01-20 15:02:43.40143885 +0000 UTC m=+759.211269708 (durationBeforeRetry 500ms). 
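
Note: the speaker pod's memberlist and metrics-certs secrets do not exist yet at 15:02:42, so MountVolume.SetUp fails and nestedpendingoperations schedules retries with growing delays: durationBeforeRetry is 500ms here and 1s on the next failure at 15:02:43, i.e. per-operation exponential backoff until the secret appears. A small sketch of that retry policy follows; the doubling factor matches what the log shows, while the ceiling is an assumption for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// backoffMount retries a mount with exponentially growing delays, the way the
// failed "memberlist" mount above is retried at 500ms, then 1s, until it succeeds.
func backoffMount(mount func() error, initial, maxDelay time.Duration) {
	delay := initial
	for {
		err := mount()
		if err == nil {
			fmt.Println("MountVolume.SetUp succeeded")
			return
		}
		fmt.Printf("MountVolume.SetUp failed: %v; no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // ceiling is illustrative; kubelet's actual cap may differ
		}
	}
}

func main() {
	attempts := 0
	mount := func() error {
		if attempts++; attempts < 3 {
			return errors.New(`secret "metallb-memberlist" not found`)
		}
		return nil // the secret finally exists
	}
	backoffMount(mount, 500*time.Millisecond, 2*time.Minute)
}
```
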
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs") pod "speaker-znbk6" (UID: "e00f603c-93d1-4941-908a-26fdf24da7b7") : secret "speaker-certs-secret" not found Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.902160 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e00f603c-93d1-4941-908a-26fdf24da7b7-metallb-excludel2\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.903962 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.906493 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-metrics-certs\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.915239 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-cert\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.916924 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jscpg\" (UniqueName: \"kubernetes.io/projected/e00f603c-93d1-4941-908a-26fdf24da7b7-kube-api-access-jscpg\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.917772 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr8rl\" (UniqueName: \"kubernetes.io/projected/b76ab7ec-a978-4aea-bc88-b2a82bc54e14-kube-api-access-rr8rl\") pod \"controller-6968d8fdc4-n6txw\" (UID: \"b76ab7ec-a978-4aea-bc88-b2a82bc54e14\") " pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.980152 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:42 crc kubenswrapper[4949]: I0120 15:02:42.989611 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.069109 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.188188 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc"] Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.407222 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.407291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:43 crc kubenswrapper[4949]: E0120 15:02:43.407395 4949 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 20 15:02:43 crc kubenswrapper[4949]: E0120 15:02:43.407459 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist podName:e00f603c-93d1-4941-908a-26fdf24da7b7 nodeName:}" failed. No retries permitted until 2026-01-20 15:02:44.407441347 +0000 UTC m=+760.217272205 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist") pod "speaker-znbk6" (UID: "e00f603c-93d1-4941-908a-26fdf24da7b7") : secret "metallb-memberlist" not found Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.413014 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-metrics-certs\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:43 crc kubenswrapper[4949]: I0120 15:02:43.478983 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n6txw"] Jan 20 15:02:43 crc kubenswrapper[4949]: W0120 15:02:43.486702 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb76ab7ec_a978_4aea_bc88_b2a82bc54e14.slice/crio-a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c WatchSource:0}: Error finding container a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c: Status 404 returned error can't find the container with id a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.147468 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"1a8ca8ed654a4ed23a7199281dfe8c28f7779bc9de631fa052cff859c4a974d3"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.149859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n6txw" event={"ID":"b76ab7ec-a978-4aea-bc88-b2a82bc54e14","Type":"ContainerStarted","Data":"56120539a840af478f8ce0cc422d4d207d2bedcda9ca66873ccbfa87454a7ec1"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.149900 4949 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/controller-6968d8fdc4-n6txw" event={"ID":"b76ab7ec-a978-4aea-bc88-b2a82bc54e14","Type":"ContainerStarted","Data":"a6f20372f2ee663cb08321684807af692c11f2586ef2edd5dcab79e5ef6d275c"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.150846 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" event={"ID":"9787b339-5a35-4568-8ea4-12b8904efd8a","Type":"ContainerStarted","Data":"2ee89685ddd2cbc321aa48c2155cd209678033cfcc14ba66082eacf348a7d80e"} Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.421765 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.427854 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e00f603c-93d1-4941-908a-26fdf24da7b7-memberlist\") pod \"speaker-znbk6\" (UID: \"e00f603c-93d1-4941-908a-26fdf24da7b7\") " pod="metallb-system/speaker-znbk6" Jan 20 15:02:44 crc kubenswrapper[4949]: I0120 15:02:44.550430 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-znbk6" Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.159709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n6txw" event={"ID":"b76ab7ec-a978-4aea-bc88-b2a82bc54e14","Type":"ContainerStarted","Data":"3f75978820f7a48290b98bf141393e817a82264b49e2dbe838b4a6bbd7ad8135"} Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.160555 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.165426 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-znbk6" event={"ID":"e00f603c-93d1-4941-908a-26fdf24da7b7","Type":"ContainerStarted","Data":"638fc6e53b428108a25abf17c10c92723b88acf2678a175224d79a74839c45cd"} Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.165460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-znbk6" event={"ID":"e00f603c-93d1-4941-908a-26fdf24da7b7","Type":"ContainerStarted","Data":"04e92b5666ffcc0b80a5cc2d3b5b7fe5b13b97e93c7ffbd6fefc175c3329e433"} Jan 20 15:02:45 crc kubenswrapper[4949]: I0120 15:02:45.179884 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-n6txw" podStartSLOduration=3.179863456 podStartE2EDuration="3.179863456s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:45.175114276 +0000 UTC m=+760.984945134" watchObservedRunningTime="2026-01-20 15:02:45.179863456 +0000 UTC m=+760.989694314" Jan 20 15:02:46 crc kubenswrapper[4949]: I0120 15:02:46.176930 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-znbk6" event={"ID":"e00f603c-93d1-4941-908a-26fdf24da7b7","Type":"ContainerStarted","Data":"816749ed41e1d8e97bfe50da9f6c70dd7cb5d977d7c5b218159fca35074ab4b6"} Jan 20 15:02:46 crc kubenswrapper[4949]: I0120 15:02:46.177277 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/speaker-znbk6" Jan 20 15:02:46 crc kubenswrapper[4949]: I0120 15:02:46.199670 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-znbk6" podStartSLOduration=4.199648728 podStartE2EDuration="4.199648728s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:02:46.194117602 +0000 UTC m=+762.003948480" watchObservedRunningTime="2026-01-20 15:02:46.199648728 +0000 UTC m=+762.009479596" Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.209166 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" event={"ID":"9787b339-5a35-4568-8ea4-12b8904efd8a","Type":"ContainerStarted","Data":"c083be359aebfb79c5be61260cbf497aceb3680ada636dba8f4c8dcd0a8de545"} Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.210534 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.220269 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4a159b5-92c1-4221-9b9d-ef46eda1afca" containerID="aa083f2b10e79b846e6404f8535b0c376e53fe6ab0ee9c1d7b0c15ae24bf62c2" exitCode=0 Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.220347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerDied","Data":"aa083f2b10e79b846e6404f8535b0c376e53fe6ab0ee9c1d7b0c15ae24bf62c2"} Jan 20 15:02:50 crc kubenswrapper[4949]: I0120 15:02:50.230078 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" podStartSLOduration=1.429542177 podStartE2EDuration="8.230059905s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="2026-01-20 15:02:43.200026299 +0000 UTC m=+759.009857147" lastFinishedPulling="2026-01-20 15:02:50.000544007 +0000 UTC m=+765.810374875" observedRunningTime="2026-01-20 15:02:50.229121596 +0000 UTC m=+766.038952474" watchObservedRunningTime="2026-01-20 15:02:50.230059905 +0000 UTC m=+766.039890763" Jan 20 15:02:51 crc kubenswrapper[4949]: I0120 15:02:51.227110 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4a159b5-92c1-4221-9b9d-ef46eda1afca" containerID="a0b7ba436ec550fe7ed707983b5ee58da70d66a8db6fe84257a0fb6896f0612c" exitCode=0 Jan 20 15:02:51 crc kubenswrapper[4949]: I0120 15:02:51.227203 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerDied","Data":"a0b7ba436ec550fe7ed707983b5ee58da70d66a8db6fe84257a0fb6896f0612c"} Jan 20 15:02:52 crc kubenswrapper[4949]: I0120 15:02:52.233685 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4a159b5-92c1-4221-9b9d-ef46eda1afca" containerID="e149c2e8e0f3a8987f7a3a0e34e5d146c5d9dca17cb635897c080c3fb7825071" exitCode=0 Jan 20 15:02:52 crc kubenswrapper[4949]: I0120 15:02:52.233732 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerDied","Data":"e149c2e8e0f3a8987f7a3a0e34e5d146c5d9dca17cb635897c080c3fb7825071"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.074075 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/controller-6968d8fdc4-n6txw" Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.245927 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"a28c88a7a7cfa1ec5486f63fc72aebda4a3dd1f79cae1002984030ff7b6bfb5d"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.246204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"23bc1730eb87179f1dfade96fb381da25b1b652afe5003e60a3fdadfaaf0ad4e"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.246215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"fa82ad50fa2d2bbe933c9566d0f3e58fd75dad0479de1af361d6f93827f100e5"} Jan 20 15:02:53 crc kubenswrapper[4949]: I0120 15:02:53.246224 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"94f903c07094965b94f780b75ecf48c575de79738df3e346ad5ac9f709656d31"} Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.257472 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"472453ccb55a102b1a8554bb294b19796fc890a1fd6dfa6006d2390ef8a12e2b"} Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.257549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hg78r" event={"ID":"a4a159b5-92c1-4221-9b9d-ef46eda1afca","Type":"ContainerStarted","Data":"690e67cd8810c5695565baec2f3c508d079030e2793d4d26cca191b759698556"} Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.257699 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.280056 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hg78r" podStartSLOduration=5.568454737 podStartE2EDuration="12.280040535s" podCreationTimestamp="2026-01-20 15:02:42 +0000 UTC" firstStartedPulling="2026-01-20 15:02:43.270152633 +0000 UTC m=+759.079983491" lastFinishedPulling="2026-01-20 15:02:49.981738431 +0000 UTC m=+765.791569289" observedRunningTime="2026-01-20 15:02:54.278098713 +0000 UTC m=+770.087929571" watchObservedRunningTime="2026-01-20 15:02:54.280040535 +0000 UTC m=+770.089871393" Jan 20 15:02:54 crc kubenswrapper[4949]: I0120 15:02:54.554118 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-znbk6" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.152795 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.152896 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.152972 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.153900 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.154031 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c" gracePeriod=600 Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.654703 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.655844 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.657368 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lc72c" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.657979 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.660201 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.673181 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.829243 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"openstack-operator-index-vlncl\" (UID: \"d0e5c180-d948-4012-b87d-f3da18868659\") " pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.930245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"openstack-operator-index-vlncl\" (UID: \"d0e5c180-d948-4012-b87d-f3da18868659\") " pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.949321 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"openstack-operator-index-vlncl\" (UID: \"d0e5c180-d948-4012-b87d-f3da18868659\") " pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.971203 4949 util.go:30] "No sandbox for pod can be found. 
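
The liveness failure above is a plain HTTP check: the kubelet GETs http://127.0.0.1:8798/health for the machine-config-daemon container and treats a connection error or a non-2xx/3xx status as a failure; once the probe's failure threshold is exceeded, the container is killed (here with a 600s grace period) and restarted, which is exactly the ContainerDied/ContainerStarted pair logged at 15:02:58 below. A minimal sketch of the check itself; the 1s timeout is an assumption, the real value comes from the probe spec:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // checkHealth mirrors the failing probe above. A refused connection
    // surfaces as err ("connect: connection refused"), as in the log.
    func checkHealth(url string) error {
        client := &http.Client{Timeout: time.Second} // assumed timeout
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        // Status codes in [200, 400) count as success for HTTP probes.
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealth("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("probe failed:", err)
        }
    }
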
Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:02:57 crc kubenswrapper[4949]: I0120 15:02:57.990241 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.046100 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.224884 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.294874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerStarted","Data":"21e6d542117cc9d67c8aa8bc925382b2cf3dc1ec20c1296d73849b576dc55ff3"} Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.297672 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c" exitCode=0 Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.299637 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c"} Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.299702 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf"} Jan 20 15:02:58 crc kubenswrapper[4949]: I0120 15:02:58.299722 4949 scope.go:117] "RemoveContainer" containerID="359b6f5a49d6a6e2642b92337fd3d2324d2c040119d7a907a4687e9fab57b259" Jan 20 15:03:00 crc kubenswrapper[4949]: I0120 15:03:00.326000 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerStarted","Data":"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de"} Jan 20 15:03:00 crc kubenswrapper[4949]: I0120 15:03:00.351651 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vlncl" podStartSLOduration=1.607974749 podStartE2EDuration="3.351626846s" podCreationTimestamp="2026-01-20 15:02:57 +0000 UTC" firstStartedPulling="2026-01-20 15:02:58.240697521 +0000 UTC m=+774.050528379" lastFinishedPulling="2026-01-20 15:02:59.984349618 +0000 UTC m=+775.794180476" observedRunningTime="2026-01-20 15:03:00.345428399 +0000 UTC m=+776.155259337" watchObservedRunningTime="2026-01-20 15:03:00.351626846 +0000 UTC m=+776.161457714" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.041746 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.642898 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nf5l6"] Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.643668 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.651793 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nf5l6"] Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.737841 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds88k\" (UniqueName: \"kubernetes.io/projected/a06c3c7b-913e-412e-833e-fcd7df154877-kube-api-access-ds88k\") pod \"openstack-operator-index-nf5l6\" (UID: \"a06c3c7b-913e-412e-833e-fcd7df154877\") " pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.839235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds88k\" (UniqueName: \"kubernetes.io/projected/a06c3c7b-913e-412e-833e-fcd7df154877-kube-api-access-ds88k\") pod \"openstack-operator-index-nf5l6\" (UID: \"a06c3c7b-913e-412e-833e-fcd7df154877\") " pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.864103 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds88k\" (UniqueName: \"kubernetes.io/projected/a06c3c7b-913e-412e-833e-fcd7df154877-kube-api-access-ds88k\") pod \"openstack-operator-index-nf5l6\" (UID: \"a06c3c7b-913e-412e-833e-fcd7df154877\") " pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:01 crc kubenswrapper[4949]: I0120 15:03:01.957737 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.337091 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vlncl" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" containerID="cri-o://a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" gracePeriod=2 Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.388560 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nf5l6"] Jan 20 15:03:02 crc kubenswrapper[4949]: W0120 15:03:02.460392 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda06c3c7b_913e_412e_833e_fcd7df154877.slice/crio-cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9 WatchSource:0}: Error finding container cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9: Status 404 returned error can't find the container with id cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9 Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.657336 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.766347 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") pod \"d0e5c180-d948-4012-b87d-f3da18868659\" (UID: \"d0e5c180-d948-4012-b87d-f3da18868659\") " Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.773349 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml" (OuterVolumeSpecName: "kube-api-access-vtsml") pod "d0e5c180-d948-4012-b87d-f3da18868659" (UID: "d0e5c180-d948-4012-b87d-f3da18868659"). InnerVolumeSpecName "kube-api-access-vtsml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.868161 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtsml\" (UniqueName: \"kubernetes.io/projected/d0e5c180-d948-4012-b87d-f3da18868659-kube-api-access-vtsml\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.985494 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-87tfc" Jan 20 15:03:02 crc kubenswrapper[4949]: I0120 15:03:02.993553 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hg78r" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345086 4949 generic.go:334] "Generic (PLEG): container finished" podID="d0e5c180-d948-4012-b87d-f3da18868659" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" exitCode=0 Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345149 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerDied","Data":"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345468 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vlncl" event={"ID":"d0e5c180-d948-4012-b87d-f3da18868659","Type":"ContainerDied","Data":"21e6d542117cc9d67c8aa8bc925382b2cf3dc1ec20c1296d73849b576dc55ff3"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345505 4949 scope.go:117] "RemoveContainer" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.345168 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vlncl" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.347871 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nf5l6" event={"ID":"a06c3c7b-913e-412e-833e-fcd7df154877","Type":"ContainerStarted","Data":"e88d3aed6e25929dbd093f20fc973583fb5e8d1915ae5ecb926b7c66a2da715c"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.347927 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nf5l6" event={"ID":"a06c3c7b-913e-412e-833e-fcd7df154877","Type":"ContainerStarted","Data":"cf1961746bad0bb98719700f82fd84d4f30ac6d181ed687ec60b5125685429f9"} Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.364206 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.370924 4949 scope.go:117] "RemoveContainer" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.371212 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vlncl"] Jan 20 15:03:03 crc kubenswrapper[4949]: E0120 15:03:03.371444 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de\": container with ID starting with a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de not found: ID does not exist" containerID="a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.371566 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de"} err="failed to get container status \"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de\": rpc error: code = NotFound desc = could not find container \"a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de\": container with ID starting with a059d4e5e9b430186b71829a9e91de21c8c4e88b396a3ee299bd8ed3406d77de not found: ID does not exist" Jan 20 15:03:03 crc kubenswrapper[4949]: I0120 15:03:03.383211 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nf5l6" podStartSLOduration=2.333116699 podStartE2EDuration="2.383188947s" podCreationTimestamp="2026-01-20 15:03:01 +0000 UTC" firstStartedPulling="2026-01-20 15:03:02.464316076 +0000 UTC m=+778.274146934" lastFinishedPulling="2026-01-20 15:03:02.514388324 +0000 UTC m=+778.324219182" observedRunningTime="2026-01-20 15:03:03.375865975 +0000 UTC m=+779.185696833" watchObservedRunningTime="2026-01-20 15:03:03.383188947 +0000 UTC m=+779.193019815" Jan 20 15:03:04 crc kubenswrapper[4949]: I0120 15:03:04.804973 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e5c180-d948-4012-b87d-f3da18868659" path="/var/lib/kubelet/pods/d0e5c180-d948-4012-b87d-f3da18868659/volumes" Jan 20 15:03:11 crc kubenswrapper[4949]: I0120 15:03:11.958638 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:11 crc kubenswrapper[4949]: I0120 15:03:11.959067 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:11 crc kubenswrapper[4949]: I0120 15:03:11.982183 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.433353 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nf5l6" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.885485 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt"] Jan 20 15:03:12 crc kubenswrapper[4949]: E0120 15:03:12.885741 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.885760 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.885891 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e5c180-d948-4012-b87d-f3da18868659" containerName="registry-server" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.886677 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.888282 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bn5t6" Jan 20 15:03:12 crc kubenswrapper[4949]: I0120 15:03:12.896529 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt"] Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.008886 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.009136 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.009210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.110593 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod 
\"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.110861 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.111037 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.111336 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.111427 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.135363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.212480 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:13 crc kubenswrapper[4949]: I0120 15:03:13.648236 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt"] Jan 20 15:03:13 crc kubenswrapper[4949]: W0120 15:03:13.653612 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeb9083f_e7f3_412d_9322_122ad5dcaaf6.slice/crio-abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2 WatchSource:0}: Error finding container abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2: Status 404 returned error can't find the container with id abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2 Jan 20 15:03:14 crc kubenswrapper[4949]: I0120 15:03:14.421998 4949 generic.go:334] "Generic (PLEG): container finished" podID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerID="ad5c83338439259468d606281ac993a876bc28579e8521af3cf681fdb9ccd198" exitCode=0 Jan 20 15:03:14 crc kubenswrapper[4949]: I0120 15:03:14.422113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"ad5c83338439259468d606281ac993a876bc28579e8521af3cf681fdb9ccd198"} Jan 20 15:03:14 crc kubenswrapper[4949]: I0120 15:03:14.422386 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerStarted","Data":"abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2"} Jan 20 15:03:15 crc kubenswrapper[4949]: I0120 15:03:15.434175 4949 generic.go:334] "Generic (PLEG): container finished" podID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerID="b8e2f4c871bb26abe03346eb055e2fd315374c1955b91f431271319d451d52d4" exitCode=0 Jan 20 15:03:15 crc kubenswrapper[4949]: I0120 15:03:15.434272 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"b8e2f4c871bb26abe03346eb055e2fd315374c1955b91f431271319d451d52d4"} Jan 20 15:03:16 crc kubenswrapper[4949]: I0120 15:03:16.444424 4949 generic.go:334] "Generic (PLEG): container finished" podID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerID="326f41a6ddc6bb06c08af6d69d58fa5aedff7ec32bf061a8f18c0d1ff3a3f3f7" exitCode=0 Jan 20 15:03:16 crc kubenswrapper[4949]: I0120 15:03:16.444550 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"326f41a6ddc6bb06c08af6d69d58fa5aedff7ec32bf061a8f18c0d1ff3a3f3f7"} Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.676565 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773117 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") pod \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773184 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") pod \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") pod \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\" (UID: \"beb9083f-e7f3-412d-9322-122ad5dcaaf6\") " Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.773900 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle" (OuterVolumeSpecName: "bundle") pod "beb9083f-e7f3-412d-9322-122ad5dcaaf6" (UID: "beb9083f-e7f3-412d-9322-122ad5dcaaf6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.774365 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.778981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p" (OuterVolumeSpecName: "kube-api-access-wkm9p") pod "beb9083f-e7f3-412d-9322-122ad5dcaaf6" (UID: "beb9083f-e7f3-412d-9322-122ad5dcaaf6"). InnerVolumeSpecName "kube-api-access-wkm9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.788176 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util" (OuterVolumeSpecName: "util") pod "beb9083f-e7f3-412d-9322-122ad5dcaaf6" (UID: "beb9083f-e7f3-412d-9322-122ad5dcaaf6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.875694 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkm9p\" (UniqueName: \"kubernetes.io/projected/beb9083f-e7f3-412d-9322-122ad5dcaaf6-kube-api-access-wkm9p\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:17 crc kubenswrapper[4949]: I0120 15:03:17.875732 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/beb9083f-e7f3-412d-9322-122ad5dcaaf6-util\") on node \"crc\" DevicePath \"\"" Jan 20 15:03:18 crc kubenswrapper[4949]: I0120 15:03:18.460340 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" event={"ID":"beb9083f-e7f3-412d-9322-122ad5dcaaf6","Type":"ContainerDied","Data":"abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2"} Jan 20 15:03:18 crc kubenswrapper[4949]: I0120 15:03:18.460387 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abb81b95122c667141f35fb0101b97254c372926e1c24220b5ef0f5034160ab2" Jan 20 15:03:18 crc kubenswrapper[4949]: I0120 15:03:18.460497 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438043 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj"] Jan 20 15:03:25 crc kubenswrapper[4949]: E0120 15:03:25.438561 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="pull" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438575 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="pull" Jan 20 15:03:25 crc kubenswrapper[4949]: E0120 15:03:25.438586 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="util" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438592 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="util" Jan 20 15:03:25 crc kubenswrapper[4949]: E0120 15:03:25.438608 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="extract" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438614 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="extract" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.438747 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb9083f-e7f3-412d-9322-122ad5dcaaf6" containerName="extract" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.439148 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.442047 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-6tl2v" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.443574 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj"] Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.584692 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstxg\" (UniqueName: \"kubernetes.io/projected/fa13f464-1245-4c7e-ba74-47e65076c9d1-kube-api-access-nstxg\") pod \"openstack-operator-controller-init-647bfc4c5c-8vnrj\" (UID: \"fa13f464-1245-4c7e-ba74-47e65076c9d1\") " pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.685915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nstxg\" (UniqueName: \"kubernetes.io/projected/fa13f464-1245-4c7e-ba74-47e65076c9d1-kube-api-access-nstxg\") pod \"openstack-operator-controller-init-647bfc4c5c-8vnrj\" (UID: \"fa13f464-1245-4c7e-ba74-47e65076c9d1\") " pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.703860 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nstxg\" (UniqueName: \"kubernetes.io/projected/fa13f464-1245-4c7e-ba74-47e65076c9d1-kube-api-access-nstxg\") pod \"openstack-operator-controller-init-647bfc4c5c-8vnrj\" (UID: \"fa13f464-1245-4c7e-ba74-47e65076c9d1\") " pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:25 crc kubenswrapper[4949]: I0120 15:03:25.757755 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:26 crc kubenswrapper[4949]: I0120 15:03:26.250035 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj"] Jan 20 15:03:26 crc kubenswrapper[4949]: I0120 15:03:26.508716 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" event={"ID":"fa13f464-1245-4c7e-ba74-47e65076c9d1","Type":"ContainerStarted","Data":"58e8aa88e359c4df7771fd55a5fd36b41e4d73281397a2132711cc924ff5a1cd"} Jan 20 15:03:31 crc kubenswrapper[4949]: I0120 15:03:31.544742 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" event={"ID":"fa13f464-1245-4c7e-ba74-47e65076c9d1","Type":"ContainerStarted","Data":"8f4d0e3069b3145f5b5f51c4e3dae970ac14431f2290d416f8271e7754cf0904"} Jan 20 15:03:31 crc kubenswrapper[4949]: I0120 15:03:31.545575 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:31 crc kubenswrapper[4949]: I0120 15:03:31.580884 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" podStartSLOduration=2.087686345 podStartE2EDuration="6.580846504s" podCreationTimestamp="2026-01-20 15:03:25 +0000 UTC" firstStartedPulling="2026-01-20 15:03:26.266379386 +0000 UTC m=+802.076210244" lastFinishedPulling="2026-01-20 15:03:30.759539525 +0000 UTC m=+806.569370403" observedRunningTime="2026-01-20 15:03:31.573358036 +0000 UTC m=+807.383188894" watchObservedRunningTime="2026-01-20 15:03:31.580846504 +0000 UTC m=+807.390677402" Jan 20 15:03:35 crc kubenswrapper[4949]: I0120 15:03:35.760989 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-647bfc4c5c-8vnrj" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.579369 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.580791 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.582979 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.583080 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-46qzz" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.583733 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.588624 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nzgqj" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.593911 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.594805 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.599543 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.601030 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dxkp8" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.611934 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.619267 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-m9grk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.620294 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.622610 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-78bc6" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.640100 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.650042 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-m9grk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.664511 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.665366 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.670065 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-fjz6j" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.684638 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696028 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bftbm\" (UniqueName: \"kubernetes.io/projected/5eae4c51-3e86-4153-8c26-d4c51b2f1331-kube-api-access-bftbm\") pod \"glance-operator-controller-manager-c6994669c-m9grk\" (UID: \"5eae4c51-3e86-4153-8c26-d4c51b2f1331\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lm6\" (UniqueName: \"kubernetes.io/projected/070f7ba5-a528-4316-8484-4ea82fb70a40-kube-api-access-28lm6\") pod \"barbican-operator-controller-manager-7ddb5c749-jzl6b\" (UID: \"070f7ba5-a528-4316-8484-4ea82fb70a40\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696124 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw98x\" (UniqueName: \"kubernetes.io/projected/070a47eb-d68f-4208-86eb-a99f0a9ce5df-kube-api-access-bw98x\") pod \"designate-operator-controller-manager-9f958b845-vhsdx\" (UID: \"070a47eb-d68f-4208-86eb-a99f0a9ce5df\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696149 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kck6x\" (UniqueName: \"kubernetes.io/projected/c44d3483-738b-4aab-a4a2-1478480b6330-kube-api-access-kck6x\") pod \"cinder-operator-controller-manager-9b68f5989-vll8p\" (UID: \"c44d3483-738b-4aab-a4a2-1478480b6330\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.696206 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwz7f\" (UniqueName: \"kubernetes.io/projected/e60d05a5-d1d5-4959-843b-654aaf547bca-kube-api-access-gwz7f\") pod \"heat-operator-controller-manager-594c8c9d5d-jxnlk\" (UID: \"e60d05a5-d1d5-4959-843b-654aaf547bca\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.703171 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.704785 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.709287 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5t25b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.714402 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.721714 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.723157 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.726790 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.727022 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cl6dx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.734174 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.741840 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.742702 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.745977 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-tcs88" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.756035 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.756918 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.762202 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lb2fw" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.762415 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.765917 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.766829 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.786000 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gvf4w" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.798939 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw98x\" (UniqueName: \"kubernetes.io/projected/070a47eb-d68f-4208-86eb-a99f0a9ce5df-kube-api-access-bw98x\") pod \"designate-operator-controller-manager-9f958b845-vhsdx\" (UID: \"070a47eb-d68f-4208-86eb-a99f0a9ce5df\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808110 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kck6x\" (UniqueName: \"kubernetes.io/projected/c44d3483-738b-4aab-a4a2-1478480b6330-kube-api-access-kck6x\") pod \"cinder-operator-controller-manager-9b68f5989-vll8p\" (UID: \"c44d3483-738b-4aab-a4a2-1478480b6330\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808332 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwz7f\" (UniqueName: \"kubernetes.io/projected/e60d05a5-d1d5-4959-843b-654aaf547bca-kube-api-access-gwz7f\") pod \"heat-operator-controller-manager-594c8c9d5d-jxnlk\" (UID: \"e60d05a5-d1d5-4959-843b-654aaf547bca\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808578 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bftbm\" (UniqueName: \"kubernetes.io/projected/5eae4c51-3e86-4153-8c26-d4c51b2f1331-kube-api-access-bftbm\") pod \"glance-operator-controller-manager-c6994669c-m9grk\" (UID: \"5eae4c51-3e86-4153-8c26-d4c51b2f1331\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.808675 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28lm6\" (UniqueName: \"kubernetes.io/projected/070f7ba5-a528-4316-8484-4ea82fb70a40-kube-api-access-28lm6\") pod \"barbican-operator-controller-manager-7ddb5c749-jzl6b\" (UID: \"070f7ba5-a528-4316-8484-4ea82fb70a40\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.831175 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.840690 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw98x\" (UniqueName: \"kubernetes.io/projected/070a47eb-d68f-4208-86eb-a99f0a9ce5df-kube-api-access-bw98x\") pod \"designate-operator-controller-manager-9f958b845-vhsdx\" (UID: \"070a47eb-d68f-4208-86eb-a99f0a9ce5df\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.848265 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gwz7f\" (UniqueName: \"kubernetes.io/projected/e60d05a5-d1d5-4959-843b-654aaf547bca-kube-api-access-gwz7f\") pod \"heat-operator-controller-manager-594c8c9d5d-jxnlk\" (UID: \"e60d05a5-d1d5-4959-843b-654aaf547bca\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.853679 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28lm6\" (UniqueName: \"kubernetes.io/projected/070f7ba5-a528-4316-8484-4ea82fb70a40-kube-api-access-28lm6\") pod \"barbican-operator-controller-manager-7ddb5c749-jzl6b\" (UID: \"070f7ba5-a528-4316-8484-4ea82fb70a40\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.855470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bftbm\" (UniqueName: \"kubernetes.io/projected/5eae4c51-3e86-4153-8c26-d4c51b2f1331-kube-api-access-bftbm\") pod \"glance-operator-controller-manager-c6994669c-m9grk\" (UID: \"5eae4c51-3e86-4153-8c26-d4c51b2f1331\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.866246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kck6x\" (UniqueName: \"kubernetes.io/projected/c44d3483-738b-4aab-a4a2-1478480b6330-kube-api-access-kck6x\") pod \"cinder-operator-controller-manager-9b68f5989-vll8p\" (UID: \"c44d3483-738b-4aab-a4a2-1478480b6330\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.868617 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.869405 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.872558 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2rc64" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.874552 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.875294 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.881174 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-494nf" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.890456 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.904826 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.908995 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.913889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrjs\" (UniqueName: \"kubernetes.io/projected/05642ba7-89bd-4d72-a31b-4e6d4532923e-kube-api-access-8rrjs\") pod \"horizon-operator-controller-manager-77d5c5b54f-5vwt4\" (UID: \"05642ba7-89bd-4d72-a31b-4e6d4532923e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.913924 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr5kj\" (UniqueName: \"kubernetes.io/projected/c07420af-b163-4ab6-8a1c-5e697629cab0-kube-api-access-xr5kj\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.913976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.914097 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5vf\" (UniqueName: \"kubernetes.io/projected/2dacfd0a-8e74-4eb1-b4cb-892ae16a9291-kube-api-access-bq5vf\") pod \"manila-operator-controller-manager-864f6b75bf-ft9st\" (UID: \"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.914139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vhn\" (UniqueName: \"kubernetes.io/projected/57182814-f19c-4247-b774-5b01afe7d680-kube-api-access-77vhn\") pod \"ironic-operator-controller-manager-78757b4889-bt9wn\" (UID: \"57182814-f19c-4247-b774-5b01afe7d680\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.914192 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjfvz\" (UniqueName: \"kubernetes.io/projected/d6706563-2c93-414e-bb49-cd74ae82d235-kube-api-access-kjfvz\") pod \"keystone-operator-controller-manager-767fdc4f47-th6cb\" (UID: \"d6706563-2c93-414e-bb49-cd74ae82d235\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.923387 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.927235 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.928241 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.930166 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2g2wl" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.935594 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.936030 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.936805 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.943031 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-28rt8" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.943231 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.943629 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.958003 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.975337 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.976690 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.979713 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pqgbt" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.981321 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.981686 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.982421 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.988177 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tpmfw" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.988381 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.990667 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.996414 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"] Jan 20 15:03:55 crc kubenswrapper[4949]: I0120 15:03:55.997365 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.001708 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-p784k" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.004876 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.012205 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015462 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vhn\" (UniqueName: \"kubernetes.io/projected/57182814-f19c-4247-b774-5b01afe7d680-kube-api-access-77vhn\") pod \"ironic-operator-controller-manager-78757b4889-bt9wn\" (UID: \"57182814-f19c-4247-b774-5b01afe7d680\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015546 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282mc\" (UniqueName: \"kubernetes.io/projected/a87686a4-1af3-4d05-ac2d-15551c80e0d7-kube-api-access-282mc\") pod \"mariadb-operator-controller-manager-c87fff755-tj7jv\" (UID: \"a87686a4-1af3-4d05-ac2d-15551c80e0d7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015572 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjfvz\" (UniqueName: \"kubernetes.io/projected/d6706563-2c93-414e-bb49-cd74ae82d235-kube-api-access-kjfvz\") pod \"keystone-operator-controller-manager-767fdc4f47-th6cb\" (UID: \"d6706563-2c93-414e-bb49-cd74ae82d235\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015602 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrjs\" (UniqueName: \"kubernetes.io/projected/05642ba7-89bd-4d72-a31b-4e6d4532923e-kube-api-access-8rrjs\") pod \"horizon-operator-controller-manager-77d5c5b54f-5vwt4\" (UID: \"05642ba7-89bd-4d72-a31b-4e6d4532923e\") " 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015621 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b242h\" (UniqueName: \"kubernetes.io/projected/017942ba-9ec1-4474-91e5-7adb1481e807-kube-api-access-b242h\") pod \"neutron-operator-controller-manager-cb4666565-ljxrw\" (UID: \"017942ba-9ec1-4474-91e5-7adb1481e807\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015643 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr5kj\" (UniqueName: \"kubernetes.io/projected/c07420af-b163-4ab6-8a1c-5e697629cab0-kube-api-access-xr5kj\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015679 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015726 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k855k\" (UniqueName: \"kubernetes.io/projected/728be0e4-4dde-4f00-be4f-af6590d7025b-kube-api-access-k855k\") pod \"nova-operator-controller-manager-65849867d6-cc9zv\" (UID: \"728be0e4-4dde-4f00-be4f-af6590d7025b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.015753 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq5vf\" (UniqueName: \"kubernetes.io/projected/2dacfd0a-8e74-4eb1-b4cb-892ae16a9291-kube-api-access-bq5vf\") pod \"manila-operator-controller-manager-864f6b75bf-ft9st\" (UID: \"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.015860 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.015932 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:56.515915253 +0000 UTC m=+832.325746111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.029485 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.030583 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.037200 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vhn\" (UniqueName: \"kubernetes.io/projected/57182814-f19c-4247-b774-5b01afe7d680-kube-api-access-77vhn\") pod \"ironic-operator-controller-manager-78757b4889-bt9wn\" (UID: \"57182814-f19c-4247-b774-5b01afe7d680\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.037295 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b7wzx" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.039938 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjfvz\" (UniqueName: \"kubernetes.io/projected/d6706563-2c93-414e-bb49-cd74ae82d235-kube-api-access-kjfvz\") pod \"keystone-operator-controller-manager-767fdc4f47-th6cb\" (UID: \"d6706563-2c93-414e-bb49-cd74ae82d235\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.045156 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrjs\" (UniqueName: \"kubernetes.io/projected/05642ba7-89bd-4d72-a31b-4e6d4532923e-kube-api-access-8rrjs\") pod \"horizon-operator-controller-manager-77d5c5b54f-5vwt4\" (UID: \"05642ba7-89bd-4d72-a31b-4e6d4532923e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.049389 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr5kj\" (UniqueName: \"kubernetes.io/projected/c07420af-b163-4ab6-8a1c-5e697629cab0-kube-api-access-xr5kj\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.052557 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5vf\" (UniqueName: \"kubernetes.io/projected/2dacfd0a-8e74-4eb1-b4cb-892ae16a9291-kube-api-access-bq5vf\") pod \"manila-operator-controller-manager-864f6b75bf-ft9st\" (UID: \"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.065459 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.071569 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.084328 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.085388 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.090628 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hnhmq" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.100709 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118418 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bl4s\" (UniqueName: \"kubernetes.io/projected/d02df557-c289-4444-b29b-917ea271a874-kube-api-access-2bl4s\") pod \"octavia-operator-controller-manager-7fc9b76cf6-g87xm\" (UID: \"d02df557-c289-4444-b29b-917ea271a874\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118463 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k855k\" (UniqueName: \"kubernetes.io/projected/728be0e4-4dde-4f00-be4f-af6590d7025b-kube-api-access-k855k\") pod \"nova-operator-controller-manager-65849867d6-cc9zv\" (UID: \"728be0e4-4dde-4f00-be4f-af6590d7025b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118502 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e-kube-api-access-w7m7g\") pod \"ovn-operator-controller-manager-55db956ddc-f52ph\" (UID: \"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118581 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282mc\" (UniqueName: \"kubernetes.io/projected/a87686a4-1af3-4d05-ac2d-15551c80e0d7-kube-api-access-282mc\") pod \"mariadb-operator-controller-manager-c87fff755-tj7jv\" (UID: \"a87686a4-1af3-4d05-ac2d-15551c80e0d7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118606 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7vz\" (UniqueName: \"kubernetes.io/projected/58fdba15-e8ba-47fa-aca8-90f638577a6b-kube-api-access-2t7vz\") pod \"placement-operator-controller-manager-686df47fcb-4kwz9\" (UID: \"58fdba15-e8ba-47fa-aca8-90f638577a6b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118653 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b242h\" (UniqueName: 
\"kubernetes.io/projected/017942ba-9ec1-4474-91e5-7adb1481e807-kube-api-access-b242h\") pod \"neutron-operator-controller-manager-cb4666565-ljxrw\" (UID: \"017942ba-9ec1-4474-91e5-7adb1481e807\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mg6\" (UniqueName: \"kubernetes.io/projected/db4c21b1-de25-4c17-a3c3-e6eea4044d77-kube-api-access-w4mg6\") pod \"swift-operator-controller-manager-85dd56d4cc-nr2lr\" (UID: \"db4c21b1-de25-4c17-a3c3-e6eea4044d77\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.118896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9gv\" (UniqueName: \"kubernetes.io/projected/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-kube-api-access-zn9gv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.134267 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.157381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282mc\" (UniqueName: \"kubernetes.io/projected/a87686a4-1af3-4d05-ac2d-15551c80e0d7-kube-api-access-282mc\") pod \"mariadb-operator-controller-manager-c87fff755-tj7jv\" (UID: \"a87686a4-1af3-4d05-ac2d-15551c80e0d7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.157641 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k855k\" (UniqueName: \"kubernetes.io/projected/728be0e4-4dde-4f00-be4f-af6590d7025b-kube-api-access-k855k\") pod \"nova-operator-controller-manager-65849867d6-cc9zv\" (UID: \"728be0e4-4dde-4f00-be4f-af6590d7025b\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.157721 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b242h\" (UniqueName: \"kubernetes.io/projected/017942ba-9ec1-4474-91e5-7adb1481e807-kube-api-access-b242h\") pod \"neutron-operator-controller-manager-cb4666565-ljxrw\" (UID: \"017942ba-9ec1-4474-91e5-7adb1481e807\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.178408 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-869947677f-8qg9p"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.179272 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.181740 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fdv8t" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.189497 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-869947677f-8qg9p"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.204366 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.220490 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e-kube-api-access-w7m7g\") pod \"ovn-operator-controller-manager-55db956ddc-f52ph\" (UID: \"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222249 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvt7h\" (UniqueName: \"kubernetes.io/projected/dc5c569e-c0ee-44bc-bdc9-397ab5941ad5-kube-api-access-gvt7h\") pod \"telemetry-operator-controller-manager-5f8f495fcf-94wzp\" (UID: \"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222280 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7vz\" (UniqueName: \"kubernetes.io/projected/58fdba15-e8ba-47fa-aca8-90f638577a6b-kube-api-access-2t7vz\") pod \"placement-operator-controller-manager-686df47fcb-4kwz9\" (UID: \"58fdba15-e8ba-47fa-aca8-90f638577a6b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mg6\" (UniqueName: \"kubernetes.io/projected/db4c21b1-de25-4c17-a3c3-e6eea4044d77-kube-api-access-w4mg6\") pod \"swift-operator-controller-manager-85dd56d4cc-nr2lr\" (UID: \"db4c21b1-de25-4c17-a3c3-e6eea4044d77\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.222371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9gv\" (UniqueName: \"kubernetes.io/projected/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-kube-api-access-zn9gv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 
15:03:56.222414 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bl4s\" (UniqueName: \"kubernetes.io/projected/d02df557-c289-4444-b29b-917ea271a874-kube-api-access-2bl4s\") pod \"octavia-operator-controller-manager-7fc9b76cf6-g87xm\" (UID: \"d02df557-c289-4444-b29b-917ea271a874\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.223475 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.223532 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:03:56.723501438 +0000 UTC m=+832.533332296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.223990 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.244137 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.245643 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.258732 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-25j8l" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.259446 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.260324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9gv\" (UniqueName: \"kubernetes.io/projected/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-kube-api-access-zn9gv\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.277422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mg6\" (UniqueName: \"kubernetes.io/projected/db4c21b1-de25-4c17-a3c3-e6eea4044d77-kube-api-access-w4mg6\") pod \"swift-operator-controller-manager-85dd56d4cc-nr2lr\" (UID: \"db4c21b1-de25-4c17-a3c3-e6eea4044d77\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.277526 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7m7g\" (UniqueName: \"kubernetes.io/projected/ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e-kube-api-access-w7m7g\") pod \"ovn-operator-controller-manager-55db956ddc-f52ph\" (UID: \"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.278163 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bl4s\" (UniqueName: \"kubernetes.io/projected/d02df557-c289-4444-b29b-917ea271a874-kube-api-access-2bl4s\") pod \"octavia-operator-controller-manager-7fc9b76cf6-g87xm\" (UID: \"d02df557-c289-4444-b29b-917ea271a874\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.281715 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7vz\" (UniqueName: \"kubernetes.io/projected/58fdba15-e8ba-47fa-aca8-90f638577a6b-kube-api-access-2t7vz\") pod \"placement-operator-controller-manager-686df47fcb-4kwz9\" (UID: \"58fdba15-e8ba-47fa-aca8-90f638577a6b\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.301602 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.328065 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztcl\" (UniqueName: \"kubernetes.io/projected/68de7d27-2202-473a-b077-d03d033244a2-kube-api-access-kztcl\") pod \"watcher-operator-controller-manager-64cd966744-jc5mh\" (UID: \"68de7d27-2202-473a-b077-d03d033244a2\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.328186 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzh7d\" (UniqueName: \"kubernetes.io/projected/63acb80f-21b4-4255-af60-03a68dd07658-kube-api-access-rzh7d\") pod \"test-operator-controller-manager-869947677f-8qg9p\" (UID: \"63acb80f-21b4-4255-af60-03a68dd07658\") " pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:03:56 crc 
kubenswrapper[4949]: I0120 15:03:56.328286 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvt7h\" (UniqueName: \"kubernetes.io/projected/dc5c569e-c0ee-44bc-bdc9-397ab5941ad5-kube-api-access-gvt7h\") pod \"telemetry-operator-controller-manager-5f8f495fcf-94wzp\" (UID: \"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.348271 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.348333 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.357414 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.363927 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.366869 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nhvd5" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.366889 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.367063 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.374763 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.375065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvt7h\" (UniqueName: \"kubernetes.io/projected/dc5c569e-c0ee-44bc-bdc9-397ab5941ad5-kube-api-access-gvt7h\") pod \"telemetry-operator-controller-manager-5f8f495fcf-94wzp\" (UID: \"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.378067 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.394053 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.403242 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.404343 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.412076 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nxhjp" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.420732 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.429214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kztcl\" (UniqueName: \"kubernetes.io/projected/68de7d27-2202-473a-b077-d03d033244a2-kube-api-access-kztcl\") pod \"watcher-operator-controller-manager-64cd966744-jc5mh\" (UID: \"68de7d27-2202-473a-b077-d03d033244a2\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.429270 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzh7d\" (UniqueName: \"kubernetes.io/projected/63acb80f-21b4-4255-af60-03a68dd07658-kube-api-access-rzh7d\") pod \"test-operator-controller-manager-869947677f-8qg9p\" (UID: \"63acb80f-21b4-4255-af60-03a68dd07658\") " pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.453597 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kztcl\" (UniqueName: \"kubernetes.io/projected/68de7d27-2202-473a-b077-d03d033244a2-kube-api-access-kztcl\") pod \"watcher-operator-controller-manager-64cd966744-jc5mh\" (UID: \"68de7d27-2202-473a-b077-d03d033244a2\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.460225 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzh7d\" (UniqueName: \"kubernetes.io/projected/63acb80f-21b4-4255-af60-03a68dd07658-kube-api-access-rzh7d\") pod \"test-operator-controller-manager-869947677f-8qg9p\" (UID: \"63acb80f-21b4-4255-af60-03a68dd07658\") " pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.504824 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.524307 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjv9c\" (UniqueName: \"kubernetes.io/projected/ec1b1a5b-0d86-40b4-9410-397d183776d0-kube-api-access-cjv9c\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531437 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531563 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.531632 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jz8\" (UniqueName: \"kubernetes.io/projected/d770793b-0e56-43cc-9707-5d062b8f7c82-kube-api-access-r7jz8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pzpkv\" (UID: \"d770793b-0e56-43cc-9707-5d062b8f7c82\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.531794 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.531837 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.531822059 +0000 UTC m=+833.341652917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.576824 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.583122 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633285 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jz8\" (UniqueName: \"kubernetes.io/projected/d770793b-0e56-43cc-9707-5d062b8f7c82-kube-api-access-r7jz8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pzpkv\" (UID: \"d770793b-0e56-43cc-9707-5d062b8f7c82\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633343 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjv9c\" (UniqueName: \"kubernetes.io/projected/ec1b1a5b-0d86-40b4-9410-397d183776d0-kube-api-access-cjv9c\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633381 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.633445 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.633728 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.633797 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.133780922 +0000 UTC m=+832.943611780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.634085 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.634118 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.134109992 +0000 UTC m=+832.943940850 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.634281 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.666767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jz8\" (UniqueName: \"kubernetes.io/projected/d770793b-0e56-43cc-9707-5d062b8f7c82-kube-api-access-r7jz8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-pzpkv\" (UID: \"d770793b-0e56-43cc-9707-5d062b8f7c82\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.667767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjv9c\" (UniqueName: \"kubernetes.io/projected/ec1b1a5b-0d86-40b4-9410-397d183776d0-kube-api-access-cjv9c\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.684899 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.732721 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.735596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.735734 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: E0120 15:03:56.735780 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:03:57.735767905 +0000 UTC m=+833.545598763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.743788 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.961638 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.979438 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-m9grk"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.985052 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk"] Jan 20 15:03:56 crc kubenswrapper[4949]: I0120 15:03:56.990190 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn"] Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.064745 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eae4c51_3e86_4153_8c26_d4c51b2f1331.slice/crio-46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f WatchSource:0}: Error finding container 46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f: Status 404 returned error can't find the container with id 46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.141752 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.141825 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141894 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141940 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141948 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:58.141935443 +0000 UTC m=+833.951766301 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.141966 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:58.141958643 +0000 UTC m=+833.951789501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.390948 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb"] Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.397717 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728be0e4_4dde_4f00_be4f_af6590d7025b.slice/crio-aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90 WatchSource:0}: Error finding container aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90: Status 404 returned error can't find the container with id aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90 Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.422711 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.457651 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.463542 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.469068 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp"] Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.486989 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc5c569e_c0ee_44bc_bdc9_397ab5941ad5.slice/crio-476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6 WatchSource:0}: Error finding container 476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6: Status 404 returned error can't find the container with id 476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6 Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.507508 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph"] Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.516253 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod017942ba_9ec1_4474_91e5_7adb1481e807.slice/crio-09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18 WatchSource:0}: Error 
finding container 09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18: Status 404 returned error can't find the container with id 09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18 Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.516618 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw"] Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.517771 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05642ba7_89bd_4d72_a31b_4e6d4532923e.slice/crio-05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab WatchSource:0}: Error finding container 05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab: Status 404 returned error can't find the container with id 05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.520740 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb4c21b1_de25_4c17_a3c3_e6eea4044d77.slice/crio-33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886 WatchSource:0}: Error finding container 33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886: Status 404 returned error can't find the container with id 33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886 Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.520854 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b242h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-ljxrw_openstack-operators(017942ba-9ec1-4474-91e5-7adb1481e807): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.520879 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8rrjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-5vwt4_openstack-operators(05642ba7-89bd-4d72-a31b-4e6d4532923e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc 
kubenswrapper[4949]: E0120 15:03:57.522368 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podUID="05642ba7-89bd-4d72-a31b-4e6d4532923e" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.522382 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podUID="017942ba-9ec1-4474-91e5-7adb1481e807" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.523120 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9"] Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.523790 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4mg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-nr2lr_openstack-operators(db4c21b1-de25-4c17-a3c3-e6eea4044d77): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: W0120 15:03:57.524297 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd02df557_c289_4444_b29b_917ea271a874.slice/crio-44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81 WatchSource:0}: Error finding container 44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81: Status 404 returned error can't find the container with id 44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81 Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.525392 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podUID="db4c21b1-de25-4c17-a3c3-e6eea4044d77" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.528954 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2bl4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-g87xm_openstack-operators(d02df557-c289-4444-b29b-917ea271a874): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.530460 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
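The repeated ErrImagePull: "pull QPS exceeded" failures above are not registry errors: they come from the kubelet's own client-side rate limit on image pulls. With roughly twenty operator deployments landing on this single CRC node at once, the pull token bucket drains, the overflow StartContainer attempts fail immediately, and the pods drop into ImagePullBackOff until a retry is admitted. (The interleaved manager.go "Failed to process watch event ... 404" warnings are cAdvisor racing freshly created container cgroups and are typically harmless.) Below is a minimal sketch of the rate-limit behavior using the client-go flowcontrol token bucket — the same utility family the kubelet uses — assuming the stock defaults of registryPullQPS=5 and registryBurst=10, which this log does not confirm:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Assumed kubelet defaults: 5 pulls/second sustained, burst of 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	// ~20 operator pods all requesting an image pull in the same instant:
	// the burst of 10 is admitted; the rest fail in the way the kubelet
	// surfaces as ErrImagePull: "pull QPS exceeded".
	for i := 1; i <= 20; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %2d: admitted\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}
```

Left alone, the ImagePullBackOff retries spread the pulls out until the bucket can admit them; raising registryPullQPS/registryBurst in the kubelet configuration has the same effect up front.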
pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" podUID="d02df557-c289-4444-b29b-917ea271a874" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.531180 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.538192 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.543096 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.549971 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.550176 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.550332 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:03:59.550305879 +0000 UTC m=+835.360136747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.614769 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-869947677f-8qg9p"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.670045 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.701985 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh"] Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.716354 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" event={"ID":"d02df557-c289-4444-b29b-917ea271a874","Type":"ContainerStarted","Data":"44bd3ec50709b7752f5ac6cb6922767de43405445058a8a8073b2c686daeee81"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.718600 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" podUID="d02df557-c289-4444-b29b-917ea271a874" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.725536 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" event={"ID":"63acb80f-21b4-4255-af60-03a68dd07658","Type":"ContainerStarted","Data":"9e65ac1944fe94635f159bf2febf4c281e815f5269823a8f05daf26da9bbac39"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.727435 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" event={"ID":"e60d05a5-d1d5-4959-843b-654aaf547bca","Type":"ContainerStarted","Data":"9b4669810b421e585ac06697c3022766dfcc043f88a798897388d1d171da0c10"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.728771 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" event={"ID":"57182814-f19c-4247-b774-5b01afe7d680","Type":"ContainerStarted","Data":"4dd748d6572dde0fb6e24953afa69276410f307c907dd4c26af6469aa3a34832"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.730041 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" event={"ID":"070a47eb-d68f-4208-86eb-a99f0a9ce5df","Type":"ContainerStarted","Data":"5707d64ee29610801066ac15d60d7045e36913d88dee4078d7258fbea6c5dd34"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.731216 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" event={"ID":"070f7ba5-a528-4316-8484-4ea82fb70a40","Type":"ContainerStarted","Data":"97d49aef181968404d37cb2582664f3a9e7ac3d880fa6f9edea6fc5ada1d1cb5"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.732212 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" event={"ID":"05642ba7-89bd-4d72-a31b-4e6d4532923e","Type":"ContainerStarted","Data":"05b13f49bb99afa9efd9cc086918184520732c11da7263624fe03422216ebfab"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.735792 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podUID="05642ba7-89bd-4d72-a31b-4e6d4532923e" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.736546 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" event={"ID":"017942ba-9ec1-4474-91e5-7adb1481e807","Type":"ContainerStarted","Data":"09c66d9fe557c52bbc8e7521350402aec4ca7cb8b0bd46fb02db81846c958a18"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.737996 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podUID="017942ba-9ec1-4474-91e5-7adb1481e807" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.740925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" 
event={"ID":"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5","Type":"ContainerStarted","Data":"476923e68e8b0a41864b97b4efb5f42413b916a35e0e1a72dd2643211e63fee6"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.754874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.755114 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.755326 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:03:59.75516008 +0000 UTC m=+835.564990938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.760443 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" event={"ID":"a87686a4-1af3-4d05-ac2d-15551c80e0d7","Type":"ContainerStarted","Data":"23d7bb8e58edaa62e2366a1088bb6a3cce6d21028920d0e40545f79fe1ae7e32"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.764173 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" event={"ID":"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e","Type":"ContainerStarted","Data":"37ca5b59d799aba0e1b4d07925d4557276bacefaa7d6478093be9651e9d97cd5"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.770084 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" event={"ID":"5eae4c51-3e86-4153-8c26-d4c51b2f1331","Type":"ContainerStarted","Data":"46485eb8ed1e82cdf669203d0d85ae61f25defe2b013fbed92f9060452ff5f8f"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.780070 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" event={"ID":"db4c21b1-de25-4c17-a3c3-e6eea4044d77","Type":"ContainerStarted","Data":"33bb291bac2ff82cf894a60b779df89640662d58373d2ebc79230debd4bdd886"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.780307 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podUID="db4c21b1-de25-4c17-a3c3-e6eea4044d77" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.781488 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" event={"ID":"728be0e4-4dde-4f00-be4f-af6590d7025b","Type":"ContainerStarted","Data":"aa7fc4f598e86a8c2d98c19d0120b610b8fe2c52a1419d446ee61399bc4a7c90"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.788412 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" event={"ID":"c44d3483-738b-4aab-a4a2-1478480b6330","Type":"ContainerStarted","Data":"9b0764e089e99b2c400628d4b91cacb3b39158ceb9fd02a4ac2bade391443316"} Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.790617 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" event={"ID":"d6706563-2c93-414e-bb49-cd74ae82d235","Type":"ContainerStarted","Data":"1e67cd1ebfaaf483fcbeac9a8459e54e81bddecce452c8600a13a34bf8ff3332"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.792408 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kztcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-jc5mh_openstack-operators(68de7d27-2202-473a-b077-d03d033244a2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.793053 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" event={"ID":"58fdba15-e8ba-47fa-aca8-90f638577a6b","Type":"ContainerStarted","Data":"7fa8dc4825e204959c44029062fb61b9a506ef29f8d33244fecbdc09198430c4"} Jan 20 15:03:57 crc kubenswrapper[4949]: E0120 15:03:57.793636 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podUID="68de7d27-2202-473a-b077-d03d033244a2" Jan 20 15:03:57 crc kubenswrapper[4949]: I0120 15:03:57.799497 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" event={"ID":"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291","Type":"ContainerStarted","Data":"992785306f5d335f8d07ec696164ebff4db246d8265fadee32a9491c45deff3b"} Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.160534 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.160934 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.160806 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.161066 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:00.1610505 +0000 UTC m=+835.970881358 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.161011 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.161099 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:00.161093892 +0000 UTC m=+835.970924750 (durationBeforeRetry 2s). 
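Note the durationBeforeRetry values attached to the failed MountVolume operations: 1s at 15:03:57, 2s here at 15:03:58, then 4s, 8s and eventually 16s in the entries that follow. That is the volume manager's per-operation exponential back-off (nestedpendingoperations.go): each failed SetUp doubles the wait before the next attempt, up to a cap. A minimal sketch of such a schedule — the 2-minute cap here is an assumed illustration value, not taken from the kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// retryDelay returns the wait before attempt n (0-based): the base delay
// doubles on every failure and is clamped to max. This reproduces the
// 1s -> 2s -> 4s -> 8s -> 16s "durationBeforeRetry" progression seen in
// the log; the cap is an assumption for illustration.
func retryDelay(base, max time.Duration, n int) time.Duration {
	d := base << uint(n) // base * 2^n
	if d <= 0 || d > max {
		return max
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("failure %d -> retry in %s\n", n+1, retryDelay(time.Second, 2*time.Minute, n))
	}
}
```

The back-off resets once an operation succeeds, which is why the mounts proceed immediately once the missing secrets finally appear.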
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.813959 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" event={"ID":"d770793b-0e56-43cc-9707-5d062b8f7c82","Type":"ContainerStarted","Data":"380e4d2d80e12ecbd6bf1f07861e6080f0d351efb3a269cd7fc38ddb58e7d051"} Jan 20 15:03:58 crc kubenswrapper[4949]: I0120 15:03:58.818047 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" event={"ID":"68de7d27-2202-473a-b077-d03d033244a2","Type":"ContainerStarted","Data":"188418b2a14d5e54ec3a7231371c638caa209beee7d6abdb67fb8cc371919648"} Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.820714 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podUID="db4c21b1-de25-4c17-a3c3-e6eea4044d77" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.821225 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podUID="68de7d27-2202-473a-b077-d03d033244a2" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.821351 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podUID="017942ba-9ec1-4474-91e5-7adb1481e807" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.821911 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" podUID="d02df557-c289-4444-b29b-917ea271a874" Jan 20 15:03:58 crc kubenswrapper[4949]: E0120 15:03:58.822502 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podUID="05642ba7-89bd-4d72-a31b-4e6d4532923e" Jan 20 15:03:59 crc kubenswrapper[4949]: I0120 15:03:59.583028 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.583219 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.583299 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:03.583280894 +0000 UTC m=+839.393111742 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: I0120 15:03:59.787056 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.787480 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.787626 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:04:03.787611091 +0000 UTC m=+839.597441949 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:03:59 crc kubenswrapper[4949]: E0120 15:03:59.834156 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podUID="68de7d27-2202-473a-b077-d03d033244a2" Jan 20 15:04:00 crc kubenswrapper[4949]: I0120 15:04:00.193869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:00 crc kubenswrapper[4949]: I0120 15:04:00.193953 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194070 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194070 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194127 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:04.194110349 +0000 UTC m=+840.003941207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:04:00 crc kubenswrapper[4949]: E0120 15:04:00.194309 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:04.194268934 +0000 UTC m=+840.004099852 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: I0120 15:04:03.678216 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.678622 4949 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.678670 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert podName:c07420af-b163-4ab6-8a1c-5e697629cab0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:11.67865614 +0000 UTC m=+847.488486998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert") pod "infra-operator-controller-manager-77c48c7859-q5h89" (UID: "c07420af-b163-4ab6-8a1c-5e697629cab0") : secret "infra-operator-webhook-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: I0120 15:04:03.881206 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.881410 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:04:03 crc kubenswrapper[4949]: E0120 15:04:03.882915 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert podName:0e576db6-d246-4a03-a2bd-8cbd7f7526fd nodeName:}" failed. No retries permitted until 2026-01-20 15:04:11.882893534 +0000 UTC m=+847.692724472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" (UID: "0e576db6-d246-4a03-a2bd-8cbd7f7526fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: I0120 15:04:04.287663 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:04 crc kubenswrapper[4949]: I0120 15:04:04.287771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.287820 4949 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.287914 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:12.287893956 +0000 UTC m=+848.097724814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "metrics-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.288066 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:04:04 crc kubenswrapper[4949]: E0120 15:04:04.288198 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:12.288151314 +0000 UTC m=+848.097982232 (durationBeforeRetry 8s). 
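Every remaining mount failure in this window is the same condition: the TLS secrets these volumes reference (webhook-server-cert, metrics-server-cert, infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert) do not yet exist in openstack-operators. They are created asynchronously — typically issued by cert-manager once the operators' webhook certificates reconcile, though this log does not show the issuer — and the kubelet simply retries until they appear, which is exactly what the "MountVolume.SetUp succeeded" entries at 15:04:11–15:04:12 below show. A small client-go diagnostic sketch that polls for one such secret (namespace and name taken from the log; the kubeconfig source and polling interval are arbitrary choices):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (adjust for in-cluster use).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "openstack-operators", "webhook-server-cert"
	for {
		if _, err := cs.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{}); err == nil {
			fmt.Printf("secret %s/%s exists; the cert volume can mount\n", ns, name)
			return
		} else {
			fmt.Printf("secret %s/%s not available (%v); retrying\n", ns, name, err)
		}
		time.Sleep(2 * time.Second)
	}
}
```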
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.712643 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.713402 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7m7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-f52ph_openstack-operators(ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.714570 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" 
podUID="ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e" Jan 20 15:04:10 crc kubenswrapper[4949]: E0120 15:04:10.900651 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" podUID="ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.278794 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.279002 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-282mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-tj7jv_openstack-operators(a87686a4-1af3-4d05-ac2d-15551c80e0d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.280151 4949 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" podUID="a87686a4-1af3-4d05-ac2d-15551c80e0d7" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.765524 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.774587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c07420af-b163-4ab6-8a1c-5e697629cab0-cert\") pod \"infra-operator-controller-manager-77c48c7859-q5h89\" (UID: \"c07420af-b163-4ab6-8a1c-5e697629cab0\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:11 crc kubenswrapper[4949]: E0120 15:04:11.906501 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" podUID="a87686a4-1af3-4d05-ac2d-15551c80e0d7" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.967655 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-cl6dx" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.967882 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.975130 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e576db6-d246-4a03-a2bd-8cbd7f7526fd-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg\" (UID: \"0e576db6-d246-4a03-a2bd-8cbd7f7526fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:11 crc kubenswrapper[4949]: I0120 15:04:11.976695 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.025171 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tpmfw" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.034002 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.311375 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.312581 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjfvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-th6cb_openstack-operators(d6706563-2c93-414e-bb49-cd74ae82d235): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.313755 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" podUID="d6706563-2c93-414e-bb49-cd74ae82d235" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.374402 4949 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.374486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.374721 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.374780 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs podName:ec1b1a5b-0d86-40b4-9410-397d183776d0 nodeName:}" failed. No retries permitted until 2026-01-20 15:04:28.374761124 +0000 UTC m=+864.184591982 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs") pod "openstack-operator-controller-manager-559d8b8b56-srtdv" (UID: "ec1b1a5b-0d86-40b4-9410-397d183776d0") : secret "webhook-server-cert" not found Jan 20 15:04:12 crc kubenswrapper[4949]: I0120 15:04:12.383476 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-metrics-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.921064 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" podUID="d6706563-2c93-414e-bb49-cd74ae82d235" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.931892 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.932047 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bq5vf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-ft9st_openstack-operators(2dacfd0a-8e74-4eb1-b4cb-892ae16a9291): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:12 crc kubenswrapper[4949]: E0120 15:04:12.933356 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" podUID="2dacfd0a-8e74-4eb1-b4cb-892ae16a9291" Jan 20 15:04:13 crc kubenswrapper[4949]: E0120 15:04:13.920762 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" podUID="2dacfd0a-8e74-4eb1-b4cb-892ae16a9291" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.053635 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.054082 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bw98x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-vhsdx_openstack-operators(070a47eb-d68f-4208-86eb-a99f0a9ce5df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.055789 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" podUID="070a47eb-d68f-4208-86eb-a99f0a9ce5df" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.355772 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.355819 4949 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.355938 4949 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:manager,Image:38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rzh7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-869947677f-8qg9p_openstack-operators(63acb80f-21b4-4255-af60-03a68dd07658): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.357113 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" podUID="63acb80f-21b4-4255-af60-03a68dd07658" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.834976 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.835159 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k855k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-cc9zv_openstack-operators(728be0e4-4dde-4f00-be4f-af6590d7025b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.836358 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" podUID="728be0e4-4dde-4f00-be4f-af6590d7025b" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.940989 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" podUID="728be0e4-4dde-4f00-be4f-af6590d7025b" Jan 20 15:04:16 crc kubenswrapper[4949]: E0120 15:04:16.940992 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.145:5001/openstack-k8s-operators/test-operator:d13a5aac38c8137de82b9d4aecf30e64d0d93ea1\\\"\"" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" podUID="63acb80f-21b4-4255-af60-03a68dd07658" Jan 20 15:04:16 
crc kubenswrapper[4949]: E0120 15:04:16.941808 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" podUID="070a47eb-d68f-4208-86eb-a99f0a9ce5df" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.272155 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.272438 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7jz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-pzpkv_openstack-operators(d770793b-0e56-43cc-9707-5d062b8f7c82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.274666 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" podUID="d770793b-0e56-43cc-9707-5d062b8f7c82" Jan 20 15:04:17 crc kubenswrapper[4949]: E0120 15:04:17.945742 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" podUID="d770793b-0e56-43cc-9707-5d062b8f7c82" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.470782 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89"] Jan 20 15:04:20 crc kubenswrapper[4949]: W0120 15:04:20.486669 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07420af_b163_4ab6_8a1c_5e697629cab0.slice/crio-d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53 WatchSource:0}: Error finding container d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53: Status 404 returned error can't find the container with id d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53 Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.569381 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg"] Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.964815 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" event={"ID":"017942ba-9ec1-4474-91e5-7adb1481e807","Type":"ContainerStarted","Data":"069c00eb50e502ee5495e88a7b24e53b82536c38188a4a0f3c174858aa4e33f1"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.966028 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.967810 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" event={"ID":"d02df557-c289-4444-b29b-917ea271a874","Type":"ContainerStarted","Data":"61956e59672aa848f27a04dda56a67c5f78cf361894360cb65adbba970b4bc34"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.968042 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.969430 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" event={"ID":"68de7d27-2202-473a-b077-d03d033244a2","Type":"ContainerStarted","Data":"8fd574f89497a788420b690c83e718a2fd8b3793679a86900e4a2a9eeaa49435"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.969625 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.970973 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" event={"ID":"57182814-f19c-4247-b774-5b01afe7d680","Type":"ContainerStarted","Data":"6ab54ca6e475b08df289e26cd6bb18b8d4039cf9567565d0273ff17ccc100778"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.971081 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 
15:04:20.972228 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" event={"ID":"dc5c569e-c0ee-44bc-bdc9-397ab5941ad5","Type":"ContainerStarted","Data":"c4d1cddb28f278d7d9afc26391833d5302f4ddc46482cee45de37e827002e451"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.972271 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.973993 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" event={"ID":"58fdba15-e8ba-47fa-aca8-90f638577a6b","Type":"ContainerStarted","Data":"2fd38171c0d1926aad9ba6835a1fb5aaa6aa3b269a83f6f8e210414e82169370"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.974039 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.975434 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" event={"ID":"db4c21b1-de25-4c17-a3c3-e6eea4044d77","Type":"ContainerStarted","Data":"c6ad7b28f7d0ac80d1d9f420bca0bae6112149c0b9e189ac28138b532b79c418"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.975556 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.976927 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" event={"ID":"e60d05a5-d1d5-4959-843b-654aaf547bca","Type":"ContainerStarted","Data":"e636c3bd4493f5cc371b6e3ae4b39a12b6ae05121102784779192c0b8eda170f"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.977067 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.978686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" event={"ID":"5eae4c51-3e86-4153-8c26-d4c51b2f1331","Type":"ContainerStarted","Data":"e10bb30f584ddb06bce9f710da34bc256dcb85f408e23e8cb4b504ab886a817a"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.979112 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.980603 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" event={"ID":"070f7ba5-a528-4316-8484-4ea82fb70a40","Type":"ContainerStarted","Data":"445e499399c9c0f74c879bbd7cd9a7e89244e3750e34a5c49e1fd0a422fbcf23"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.981004 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.981853 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" 
event={"ID":"c07420af-b163-4ab6-8a1c-5e697629cab0","Type":"ContainerStarted","Data":"d83d2de473c6e62a6746b1fb409635a375cad2f96da70d20806c151c8c4b6e53"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.982656 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" event={"ID":"0e576db6-d246-4a03-a2bd-8cbd7f7526fd","Type":"ContainerStarted","Data":"90a4940e8e1a9d0a8c1ebd564d6cc201e895e5384face97b0506bbbe363f1a35"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.983769 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" event={"ID":"c44d3483-738b-4aab-a4a2-1478480b6330","Type":"ContainerStarted","Data":"7309b185c876e9809be7940af27c030bba7877ce99cb0a77ea2e07b328b78420"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.984105 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.985340 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" event={"ID":"05642ba7-89bd-4d72-a31b-4e6d4532923e","Type":"ContainerStarted","Data":"93a85a022c8d5e0d4a167e133fad10b4cf307fc6f29b32c91320dd3e4f8ab301"} Jan 20 15:04:20 crc kubenswrapper[4949]: I0120 15:04:20.985690 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.119764 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" podStartSLOduration=6.835652226 podStartE2EDuration="26.119748714s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:56.763488836 +0000 UTC m=+832.573319694" lastFinishedPulling="2026-01-20 15:04:16.047585304 +0000 UTC m=+851.857416182" observedRunningTime="2026-01-20 15:04:21.106820501 +0000 UTC m=+856.916651369" watchObservedRunningTime="2026-01-20 15:04:21.119748714 +0000 UTC m=+856.929579572" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.121979 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" podStartSLOduration=3.510017165 podStartE2EDuration="26.121972561s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.520714331 +0000 UTC m=+833.330545189" lastFinishedPulling="2026-01-20 15:04:20.132669727 +0000 UTC m=+855.942500585" observedRunningTime="2026-01-20 15:04:21.013653277 +0000 UTC m=+856.823484135" watchObservedRunningTime="2026-01-20 15:04:21.121972561 +0000 UTC m=+856.931803419" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.247735 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" podStartSLOduration=6.953341745 podStartE2EDuration="26.247717595s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:56.753312657 +0000 UTC m=+832.563143515" lastFinishedPulling="2026-01-20 15:04:16.047688457 +0000 UTC m=+851.857519365" observedRunningTime="2026-01-20 15:04:21.191093287 +0000 UTC m=+857.000924145" watchObservedRunningTime="2026-01-20 
15:04:21.247717595 +0000 UTC m=+857.057548443" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.307800 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" podStartSLOduration=3.6791616129999998 podStartE2EDuration="26.307784726s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.5207093 +0000 UTC m=+833.330540158" lastFinishedPulling="2026-01-20 15:04:20.149332413 +0000 UTC m=+855.959163271" observedRunningTime="2026-01-20 15:04:21.255644705 +0000 UTC m=+857.065475563" watchObservedRunningTime="2026-01-20 15:04:21.307784726 +0000 UTC m=+857.117615584" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.364393 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" podStartSLOduration=7.391482671 podStartE2EDuration="26.364372922s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.074715114 +0000 UTC m=+832.884545972" lastFinishedPulling="2026-01-20 15:04:16.047605325 +0000 UTC m=+851.857436223" observedRunningTime="2026-01-20 15:04:21.363482366 +0000 UTC m=+857.173313234" watchObservedRunningTime="2026-01-20 15:04:21.364372922 +0000 UTC m=+857.174203790" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.369211 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" podStartSLOduration=7.126922508 podStartE2EDuration="26.369188779s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.060838773 +0000 UTC m=+832.870669631" lastFinishedPulling="2026-01-20 15:04:16.303105004 +0000 UTC m=+852.112935902" observedRunningTime="2026-01-20 15:04:21.311094106 +0000 UTC m=+857.120924964" watchObservedRunningTime="2026-01-20 15:04:21.369188779 +0000 UTC m=+857.179019637" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.431536 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" podStartSLOduration=6.699414802 podStartE2EDuration="26.431498298s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.518660088 +0000 UTC m=+833.328490946" lastFinishedPulling="2026-01-20 15:04:17.250743584 +0000 UTC m=+853.060574442" observedRunningTime="2026-01-20 15:04:21.429953062 +0000 UTC m=+857.239783930" watchObservedRunningTime="2026-01-20 15:04:21.431498298 +0000 UTC m=+857.241329156" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.479680 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" podStartSLOduration=5.032997034 podStartE2EDuration="26.479664269s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.506963194 +0000 UTC m=+833.316794062" lastFinishedPulling="2026-01-20 15:04:18.953630439 +0000 UTC m=+854.763461297" observedRunningTime="2026-01-20 15:04:21.47145572 +0000 UTC m=+857.281286578" watchObservedRunningTime="2026-01-20 15:04:21.479664269 +0000 UTC m=+857.289495127" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.518563 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" 
podStartSLOduration=3.9645827000000002 podStartE2EDuration="26.518545988s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.528252259 +0000 UTC m=+833.338083117" lastFinishedPulling="2026-01-20 15:04:20.082215547 +0000 UTC m=+855.892046405" observedRunningTime="2026-01-20 15:04:21.518206068 +0000 UTC m=+857.328036926" watchObservedRunningTime="2026-01-20 15:04:21.518545988 +0000 UTC m=+857.328376846" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.553734 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" podStartSLOduration=3.213163029 podStartE2EDuration="25.553709535s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.792171723 +0000 UTC m=+833.602002581" lastFinishedPulling="2026-01-20 15:04:20.132718229 +0000 UTC m=+855.942549087" observedRunningTime="2026-01-20 15:04:21.552785557 +0000 UTC m=+857.362616415" watchObservedRunningTime="2026-01-20 15:04:21.553709535 +0000 UTC m=+857.363540393" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.642285 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" podStartSLOduration=6.892898452 podStartE2EDuration="26.6422612s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.074083276 +0000 UTC m=+832.883914144" lastFinishedPulling="2026-01-20 15:04:16.823446034 +0000 UTC m=+852.633276892" observedRunningTime="2026-01-20 15:04:21.590072748 +0000 UTC m=+857.399903606" watchObservedRunningTime="2026-01-20 15:04:21.6422612 +0000 UTC m=+857.452092058" Jan 20 15:04:21 crc kubenswrapper[4949]: I0120 15:04:21.659855 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" podStartSLOduration=4.056169998 podStartE2EDuration="26.659835813s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.523690861 +0000 UTC m=+833.333521719" lastFinishedPulling="2026-01-20 15:04:20.127356686 +0000 UTC m=+855.937187534" observedRunningTime="2026-01-20 15:04:21.624005606 +0000 UTC m=+857.433836464" watchObservedRunningTime="2026-01-20 15:04:21.659835813 +0000 UTC m=+857.469666671" Jan 20 15:04:23 crc kubenswrapper[4949]: I0120 15:04:23.001550 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" event={"ID":"ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e","Type":"ContainerStarted","Data":"d45ab7ebb9c179b5b8a5cd81e560f446a20fe915bb946db40915f72e64458018"} Jan 20 15:04:23 crc kubenswrapper[4949]: I0120 15:04:23.002122 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:04:23 crc kubenswrapper[4949]: I0120 15:04:23.021254 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" podStartSLOduration=3.276943016 podStartE2EDuration="28.021234982s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.512413649 +0000 UTC m=+833.322244507" lastFinishedPulling="2026-01-20 15:04:22.256705615 +0000 UTC m=+858.066536473" observedRunningTime="2026-01-20 15:04:23.021147239 +0000 UTC m=+858.830978097" 
watchObservedRunningTime="2026-01-20 15:04:23.021234982 +0000 UTC m=+858.831065840" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.018357 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" event={"ID":"c07420af-b163-4ab6-8a1c-5e697629cab0","Type":"ContainerStarted","Data":"ae0a5761819ed76fb8b7ff7b6c18bad2c628a5d523bbd3647416af6f387c45f0"} Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.019045 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.019658 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" event={"ID":"0e576db6-d246-4a03-a2bd-8cbd7f7526fd","Type":"ContainerStarted","Data":"17a4498a9b4b01078dd9202d605c1d0c7824002468986371dc999bf118974b72"} Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.020134 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.021480 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" event={"ID":"a87686a4-1af3-4d05-ac2d-15551c80e0d7","Type":"ContainerStarted","Data":"f516cce3a7a0c46b062f69e9ed53470d46cd98d0789f629b710d90511f559c02"} Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.022314 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.050643 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" podStartSLOduration=26.524288613 podStartE2EDuration="30.050613979s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:04:20.488494119 +0000 UTC m=+856.298324987" lastFinishedPulling="2026-01-20 15:04:24.014819495 +0000 UTC m=+859.824650353" observedRunningTime="2026-01-20 15:04:25.049804225 +0000 UTC m=+860.859635083" watchObservedRunningTime="2026-01-20 15:04:25.050613979 +0000 UTC m=+860.860444857" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.078953 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" podStartSLOduration=26.626493022 podStartE2EDuration="30.078930717s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:04:20.582726807 +0000 UTC m=+856.392557665" lastFinishedPulling="2026-01-20 15:04:24.035164502 +0000 UTC m=+859.844995360" observedRunningTime="2026-01-20 15:04:25.076222515 +0000 UTC m=+860.886053373" watchObservedRunningTime="2026-01-20 15:04:25.078930717 +0000 UTC m=+860.888761585" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.092430 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" podStartSLOduration=2.814153241 podStartE2EDuration="30.092412077s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.461750062 +0000 UTC m=+833.271580920" lastFinishedPulling="2026-01-20 
15:04:24.740008898 +0000 UTC m=+860.549839756" observedRunningTime="2026-01-20 15:04:25.088175038 +0000 UTC m=+860.898005896" watchObservedRunningTime="2026-01-20 15:04:25.092412077 +0000 UTC m=+860.902242935" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.907146 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-jzl6b" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.927027 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-vll8p" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.946770 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-m9grk" Jan 20 15:04:25 crc kubenswrapper[4949]: I0120 15:04:25.984732 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-jxnlk" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.029113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" event={"ID":"d6706563-2c93-414e-bb49-cd74ae82d235","Type":"ContainerStarted","Data":"f01a1f74db045341f4d6df0e5d64a628d6d983970a7fab30d0b85632ec3cc6aa"} Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.030647 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.046574 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" podStartSLOduration=3.106384233 podStartE2EDuration="31.046558484s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.400723762 +0000 UTC m=+833.210554620" lastFinishedPulling="2026-01-20 15:04:25.340898013 +0000 UTC m=+861.150728871" observedRunningTime="2026-01-20 15:04:26.044163392 +0000 UTC m=+861.853994250" watchObservedRunningTime="2026-01-20 15:04:26.046558484 +0000 UTC m=+861.856389342" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.076617 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-bt9wn" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.248168 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ljxrw" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.350742 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5vwt4" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.382693 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-g87xm" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.508970 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-4kwz9" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.528085 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-nr2lr" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.585996 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-94wzp" Jan 20 15:04:26 crc kubenswrapper[4949]: I0120 15:04:26.636455 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-jc5mh" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.469641 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.477844 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec1b1a5b-0d86-40b4-9410-397d183776d0-webhook-certs\") pod \"openstack-operator-controller-manager-559d8b8b56-srtdv\" (UID: \"ec1b1a5b-0d86-40b4-9410-397d183776d0\") " pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.536701 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-nhvd5" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.548724 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:28 crc kubenswrapper[4949]: I0120 15:04:28.862969 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv"] Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.050094 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" event={"ID":"2dacfd0a-8e74-4eb1-b4cb-892ae16a9291","Type":"ContainerStarted","Data":"dc93a4086d275f66b2d3cdf51e900f1fa48c7d8db1f42c04d3a965b1b23ec0ce"} Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.050650 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.051238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" event={"ID":"ec1b1a5b-0d86-40b4-9410-397d183776d0","Type":"ContainerStarted","Data":"79441e48caab8cd33138b2dd3b4e69555fca5b12fcb07c4d797859b670121d98"} Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.051266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" event={"ID":"ec1b1a5b-0d86-40b4-9410-397d183776d0","Type":"ContainerStarted","Data":"d5f0682db8138d307e75a84138080c173f40c4ce6215954384a5ea982cb2b51c"} Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.051362 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 
15:04:29.064718 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" podStartSLOduration=2.962822239 podStartE2EDuration="34.064697449s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.431420723 +0000 UTC m=+833.241251581" lastFinishedPulling="2026-01-20 15:04:28.533295923 +0000 UTC m=+864.343126791" observedRunningTime="2026-01-20 15:04:29.063329817 +0000 UTC m=+864.873160675" watchObservedRunningTime="2026-01-20 15:04:29.064697449 +0000 UTC m=+864.874528307" Jan 20 15:04:29 crc kubenswrapper[4949]: I0120 15:04:29.085980 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" podStartSLOduration=33.085962093 podStartE2EDuration="33.085962093s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:04:29.0852097 +0000 UTC m=+864.895040558" watchObservedRunningTime="2026-01-20 15:04:29.085962093 +0000 UTC m=+864.895792961" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.062694 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" event={"ID":"070a47eb-d68f-4208-86eb-a99f0a9ce5df","Type":"ContainerStarted","Data":"6907e13b68ff1bf28937db41551815358fc500e7e5c3a0c38225d154de275643"} Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.063502 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.065075 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" event={"ID":"63acb80f-21b4-4255-af60-03a68dd07658","Type":"ContainerStarted","Data":"55d53ff82c9a5207b70f53b4b1c25c226424378c338a279f7caf4b313f79f5df"} Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.065344 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.081019 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" podStartSLOduration=2.534014374 podStartE2EDuration="35.081003601s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.010001252 +0000 UTC m=+832.819832100" lastFinishedPulling="2026-01-20 15:04:29.556990459 +0000 UTC m=+865.366821327" observedRunningTime="2026-01-20 15:04:30.075143754 +0000 UTC m=+865.884974612" watchObservedRunningTime="2026-01-20 15:04:30.081003601 +0000 UTC m=+865.890834459" Jan 20 15:04:30 crc kubenswrapper[4949]: I0120 15:04:30.103309 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" podStartSLOduration=2.672932385 podStartE2EDuration="34.103287777s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.635534403 +0000 UTC m=+833.445365261" lastFinishedPulling="2026-01-20 15:04:29.065889795 +0000 UTC m=+864.875720653" observedRunningTime="2026-01-20 15:04:30.09679347 +0000 UTC 
m=+865.906624328" watchObservedRunningTime="2026-01-20 15:04:30.103287777 +0000 UTC m=+865.913118645" Jan 20 15:04:31 crc kubenswrapper[4949]: I0120 15:04:31.982936 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-q5h89" Jan 20 15:04:32 crc kubenswrapper[4949]: I0120 15:04:32.043985 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg" Jan 20 15:04:35 crc kubenswrapper[4949]: I0120 15:04:35.938966 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-vhsdx" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.136751 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-th6cb" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.207396 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-ft9st" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.275710 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-tj7jv" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.398544 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-f52ph" Jan 20 15:04:36 crc kubenswrapper[4949]: I0120 15:04:36.582867 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-869947677f-8qg9p" Jan 20 15:04:38 crc kubenswrapper[4949]: I0120 15:04:38.555354 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-559d8b8b56-srtdv" Jan 20 15:04:41 crc kubenswrapper[4949]: I0120 15:04:41.137201 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" event={"ID":"d770793b-0e56-43cc-9707-5d062b8f7c82","Type":"ContainerStarted","Data":"2f340b5963a8d2c677be121f00de85ce001cbb8e921165a80c56312993aca9ce"} Jan 20 15:04:43 crc kubenswrapper[4949]: I0120 15:04:43.212440 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-pzpkv" podStartSLOduration=13.36678752 podStartE2EDuration="47.212408031s" podCreationTimestamp="2026-01-20 15:03:56 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.790169873 +0000 UTC m=+833.600000741" lastFinishedPulling="2026-01-20 15:04:31.635790394 +0000 UTC m=+867.445621252" observedRunningTime="2026-01-20 15:04:43.204002796 +0000 UTC m=+879.013833694" watchObservedRunningTime="2026-01-20 15:04:43.212408031 +0000 UTC m=+879.022238929" Jan 20 15:04:48 crc kubenswrapper[4949]: I0120 15:04:48.221709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" event={"ID":"728be0e4-4dde-4f00-be4f-af6590d7025b","Type":"ContainerStarted","Data":"1186b2508bc643bf6d983a6dcf5b704e772e7280531a671157d0262183a61315"} Jan 20 15:04:48 crc kubenswrapper[4949]: I0120 15:04:48.222728 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:04:48 crc kubenswrapper[4949]: I0120 15:04:48.249080 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" podStartSLOduration=3.282538115 podStartE2EDuration="53.249061673s" podCreationTimestamp="2026-01-20 15:03:55 +0000 UTC" firstStartedPulling="2026-01-20 15:03:57.400442653 +0000 UTC m=+833.210273511" lastFinishedPulling="2026-01-20 15:04:47.366966181 +0000 UTC m=+883.176797069" observedRunningTime="2026-01-20 15:04:48.244814405 +0000 UTC m=+884.054645273" watchObservedRunningTime="2026-01-20 15:04:48.249061673 +0000 UTC m=+884.058892531" Jan 20 15:04:56 crc kubenswrapper[4949]: I0120 15:04:56.352347 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-cc9zv" Jan 20 15:04:57 crc kubenswrapper[4949]: I0120 15:04:57.152864 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:04:57 crc kubenswrapper[4949]: I0120 15:04:57.152989 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.649742 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.651840 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654448 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654559 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654463 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.654596 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tnp5c" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.668787 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.694739 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.697045 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.699283 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.708578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724358 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.724610 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826073 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826156 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826187 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" 
Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.826283 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.827698 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.827727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.828288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.846234 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"dnsmasq-dns-78dd6ddcc-c7tfd\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.846546 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"dnsmasq-dns-675f4bcbfc-bvzqr\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:12 crc kubenswrapper[4949]: I0120 15:05:12.977816 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.020089 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.274802 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.285905 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.307359 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:13 crc kubenswrapper[4949]: W0120 15:05:13.315536 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9929c1a_9656_4f9b_b7e0_b86b7e1f5ce1.slice/crio-67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3 WatchSource:0}: Error finding container 67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3: Status 404 returned error can't find the container with id 67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3 Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.425579 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" event={"ID":"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1","Type":"ContainerStarted","Data":"67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3"} Jan 20 15:05:13 crc kubenswrapper[4949]: I0120 15:05:13.428438 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" event={"ID":"11aac808-7998-48bc-b54a-75b207b8a12b","Type":"ContainerStarted","Data":"0e4baeea71ced942bdc3203d8a06e8b6e52347327c3fa43612565ec346c789be"} Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.589413 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.615490 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.617042 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.629665 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.669481 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.669960 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.669996 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.771273 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.771334 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.771367 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.772396 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.773204 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.822017 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqgbz\" (UniqueName: 
\"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"dnsmasq-dns-666b6646f7-w9f28\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.935464 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.955537 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.957949 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.963895 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.973278 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.973327 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.973391 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:15 crc kubenswrapper[4949]: I0120 15:05:15.982693 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.074142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.074199 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.074271 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.075176 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.075502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.110461 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"dnsmasq-dns-57d769cc4f-dwl6v\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.303295 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.530112 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:16 crc kubenswrapper[4949]: W0120 15:05:16.552372 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c41597_7a3e_40c0_91d3_a73771874abe.slice/crio-d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd WatchSource:0}: Error finding container d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd: Status 404 returned error can't find the container with id d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.780356 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.781415 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.784976 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785385 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785406 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785241 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785499 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785554 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cpjq5" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.785483 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.803413 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.808633 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.893738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.893985 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894011 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894040 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894071 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894183 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894288 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894309 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.894349 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.995937 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.995974 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996013 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996054 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996075 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996188 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996211 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996263 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.996540 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.997214 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.997328 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.997899 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.998129 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 20 15:05:16 crc kubenswrapper[4949]: I0120 15:05:16.999117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.003072 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.003113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.051224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.054952 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.077044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.088908 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") " pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc 
kubenswrapper[4949]: I0120 15:05:17.093544 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.098769 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.103705 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104037 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104177 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104433 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2fdrl" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.104670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.106430 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.107765 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.117252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.158183 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.202687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204075 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204246 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204344 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204508 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204747 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204863 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204901 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.204974 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.310732 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.310987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311041 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311079 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311095 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311144 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311163 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311185 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.311635 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.312004 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.312179 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.312662 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.313152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.313638 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.319327 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.319377 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.320188 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.326306 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.335175 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.349877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.442992 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.472059 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" event={"ID":"76c41597-7a3e-40c0-91d3-a73771874abe","Type":"ContainerStarted","Data":"d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd"} Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.474966 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" event={"ID":"a79c257a-a3a3-4db1-8f46-a0a499808dbf","Type":"ContainerStarted","Data":"d101eef9e73f679d6e83da351b32371512e88f55aedbbee4bdb3b09d5a79f5d8"} Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.673898 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:05:17 crc kubenswrapper[4949]: W0120 15:05:17.691345 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4b5f65_52fe_4e8b_9d12_817e94e9b629.slice/crio-3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57 WatchSource:0}: Error finding container 3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57: Status 404 returned error can't find the container with id 3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57 Jan 20 15:05:17 crc kubenswrapper[4949]: W0120 15:05:17.912456 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c1f546_0796_457f_8b06_a5ffd11e1b36.slice/crio-554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a WatchSource:0}: Error finding container 554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a: Status 404 returned error can't find the container with id 554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a Jan 20 15:05:17 crc kubenswrapper[4949]: I0120 15:05:17.912841 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.138948 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.140031 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.142737 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-pbjm2" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.143140 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.143268 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.143833 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.152217 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.154239 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235550 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235599 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfwm\" (UniqueName: \"kubernetes.io/projected/ee020527-9591-42dc-b000-3153caede9cf-kube-api-access-6lfwm\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235630 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235651 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee020527-9591-42dc-b000-3153caede9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235678 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235863 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.235921 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.236033 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337111 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfwm\" (UniqueName: \"kubernetes.io/projected/ee020527-9591-42dc-b000-3153caede9cf-kube-api-access-6lfwm\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337164 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337181 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee020527-9591-42dc-b000-3153caede9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337207 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337620 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.337637 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.338614 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.338739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.339266 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.339508 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee020527-9591-42dc-b000-3153caede9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.340067 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee020527-9591-42dc-b000-3153caede9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.345378 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.352560 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee020527-9591-42dc-b000-3153caede9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.359264 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfwm\" (UniqueName: \"kubernetes.io/projected/ee020527-9591-42dc-b000-3153caede9cf-kube-api-access-6lfwm\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.368711 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"ee020527-9591-42dc-b000-3153caede9cf\") " pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.468381 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.497043 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerStarted","Data":"554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a"} Jan 20 15:05:18 crc kubenswrapper[4949]: I0120 15:05:18.499575 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerStarted","Data":"3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57"} Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.435302 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.436587 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.448561 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.448716 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.449032 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6lsr6" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.450712 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.462052 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dk7c\" (UniqueName: \"kubernetes.io/projected/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kube-api-access-4dk7c\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557913 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.557934 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc 
kubenswrapper[4949]: I0120 15:05:19.558076 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.558196 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.558247 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.558435 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.659914 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.659988 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660038 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dk7c\" (UniqueName: \"kubernetes.io/projected/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kube-api-access-4dk7c\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660077 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660108 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660140 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660200 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.660786 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.663584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.664582 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.665177 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.666190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f03e93a7-24b6-499c-89bc-1bf3e67221a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.696086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.717010 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f03e93a7-24b6-499c-89bc-1bf3e67221a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.724196 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dk7c\" (UniqueName: \"kubernetes.io/projected/f03e93a7-24b6-499c-89bc-1bf3e67221a6-kube-api-access-4dk7c\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.762887 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f03e93a7-24b6-499c-89bc-1bf3e67221a6\") " pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.773559 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.774551 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.784942 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.785509 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.794946 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.795123 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.827217 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4xkkx" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb74p\" (UniqueName: \"kubernetes.io/projected/485725f6-91f1-413b-89f5-21bde785bd94-kube-api-access-kb74p\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895232 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-combined-ca-bundle\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895267 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-kolla-config\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895342 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-memcached-tls-certs\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.895379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-config-data\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997062 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-memcached-tls-certs\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-config-data\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb74p\" (UniqueName: \"kubernetes.io/projected/485725f6-91f1-413b-89f5-21bde785bd94-kube-api-access-kb74p\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-combined-ca-bundle\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " 
pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997244 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-kolla-config\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.997956 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-kolla-config\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:19 crc kubenswrapper[4949]: I0120 15:05:19.999026 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/485725f6-91f1-413b-89f5-21bde785bd94-config-data\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.004729 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-combined-ca-bundle\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.005082 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/485725f6-91f1-413b-89f5-21bde785bd94-memcached-tls-certs\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.027283 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb74p\" (UniqueName: \"kubernetes.io/projected/485725f6-91f1-413b-89f5-21bde785bd94-kube-api-access-kb74p\") pod \"memcached-0\" (UID: \"485725f6-91f1-413b-89f5-21bde785bd94\") " pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.067008 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 20 15:05:20 crc kubenswrapper[4949]: W0120 15:05:20.071917 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee020527_9591_42dc_b000_3153caede9cf.slice/crio-c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e WatchSource:0}: Error finding container c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e: Status 404 returned error can't find the container with id c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.128468 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.357797 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.408056 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.541475 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"485725f6-91f1-413b-89f5-21bde785bd94","Type":"ContainerStarted","Data":"ea46fccc499c4238859782b49a52a77a8fb6eabb8902db29b5a5b0b74dbaf84b"} Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.544699 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerStarted","Data":"c31af966204fdef1efb3fd3f8d15d6d62278f7562a7202a66379547f2ce5363e"} Jan 20 15:05:20 crc kubenswrapper[4949]: I0120 15:05:20.546883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerStarted","Data":"a76fa3941f969127afd41ade63807bce736ad8187b911dee25cdb85411f3f7cf"} Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.648658 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.649557 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.651807 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bv6d5" Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.668910 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.723185 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"kube-state-metrics-0\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " pod="openstack/kube-state-metrics-0" Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.824684 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"kube-state-metrics-0\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " pod="openstack/kube-state-metrics-0" Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.845375 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"kube-state-metrics-0\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " pod="openstack/kube-state-metrics-0" Jan 20 15:05:21 crc kubenswrapper[4949]: I0120 15:05:21.969191 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:05:22 crc kubenswrapper[4949]: I0120 15:05:22.510615 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:05:24 crc kubenswrapper[4949]: I0120 15:05:24.861408 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:05:24 crc kubenswrapper[4949]: I0120 15:05:24.863537 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:24 crc kubenswrapper[4949]: I0120 15:05:24.893600 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.001590 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.001769 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.001824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.102760 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.102804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.102837 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.103211 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " 
pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.103463 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.121795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"community-operators-825w7\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.190009 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.584349 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerStarted","Data":"b6f194539b862d0ee8b6be35de75344541fb71d8b75e2a6809fe23930f272acc"} Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.858594 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nqhh2"] Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.859503 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.871976 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.872003 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.873377 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dk9sh" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.874275 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2"] Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.897041 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kbnxn"] Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.903718 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:25 crc kubenswrapper[4949]: I0120 15:05:25.919117 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kbnxn"] Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015305 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-etc-ovs\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc6b\" (UniqueName: \"kubernetes.io/projected/bce99786-819a-47cc-8ad7-0c5581f034fa-kube-api-access-7rc6b\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015444 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-log\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-run\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015507 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-combined-ca-bundle\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015587 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-log-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015636 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkqbp\" (UniqueName: \"kubernetes.io/projected/c4179fca-4378-4347-a519-96120d9ae1cc-kube-api-access-fkqbp\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-lib\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015750 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-ovn-controller-tls-certs\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015862 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015922 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bce99786-819a-47cc-8ad7-0c5581f034fa-scripts\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.015957 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4179fca-4378-4347-a519-96120d9ae1cc-scripts\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.016024 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.117883 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.117956 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-etc-ovs\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc6b\" (UniqueName: \"kubernetes.io/projected/bce99786-819a-47cc-8ad7-0c5581f034fa-kube-api-access-7rc6b\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118038 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-log\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-run\") pod \"ovn-controller-ovs-kbnxn\" (UID: 
\"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118087 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-combined-ca-bundle\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118107 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-log-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkqbp\" (UniqueName: \"kubernetes.io/projected/c4179fca-4378-4347-a519-96120d9ae1cc-kube-api-access-fkqbp\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-lib\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118174 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-ovn-controller-tls-certs\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bce99786-819a-47cc-8ad7-0c5581f034fa-scripts\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118276 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4179fca-4378-4347-a519-96120d9ae1cc-scripts\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-etc-ovs\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118651 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-lib\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118700 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-log-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-log\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.118910 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bce99786-819a-47cc-8ad7-0c5581f034fa-var-run\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.119284 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.119419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4179fca-4378-4347-a519-96120d9ae1cc-var-run-ovn\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.120936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4179fca-4378-4347-a519-96120d9ae1cc-scripts\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.131363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bce99786-819a-47cc-8ad7-0c5581f034fa-scripts\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.132379 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-combined-ca-bundle\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.133833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc6b\" (UniqueName: \"kubernetes.io/projected/bce99786-819a-47cc-8ad7-0c5581f034fa-kube-api-access-7rc6b\") pod \"ovn-controller-ovs-kbnxn\" (UID: \"bce99786-819a-47cc-8ad7-0c5581f034fa\") " pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.134420 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkqbp\" (UniqueName: \"kubernetes.io/projected/c4179fca-4378-4347-a519-96120d9ae1cc-kube-api-access-fkqbp\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.135297 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4179fca-4378-4347-a519-96120d9ae1cc-ovn-controller-tls-certs\") pod \"ovn-controller-nqhh2\" (UID: \"c4179fca-4378-4347-a519-96120d9ae1cc\") " pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.205090 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.228846 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.738712 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.753115 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.756388 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.756683 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-psvcg" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.757224 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.767951 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.767981 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.769641 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834255 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834367 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc 
kubenswrapper[4949]: I0120 15:05:26.834408 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834436 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834465 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlcvg\" (UniqueName: \"kubernetes.io/projected/ab38c923-ec3b-400d-864a-c5e8a0d53999-kube-api-access-hlcvg\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834543 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.834598 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936715 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.936878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlcvg\" (UniqueName: \"kubernetes.io/projected/ab38c923-ec3b-400d-864a-c5e8a0d53999-kube-api-access-hlcvg\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937071 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937223 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.937681 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.938717 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab38c923-ec3b-400d-864a-c5e8a0d53999-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.948869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.949346 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.952061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.956198 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hlcvg\" (UniqueName: \"kubernetes.io/projected/ab38c923-ec3b-400d-864a-c5e8a0d53999-kube-api-access-hlcvg\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.958117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab38c923-ec3b-400d-864a-c5e8a0d53999-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:26 crc kubenswrapper[4949]: I0120 15:05:26.959733 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab38c923-ec3b-400d-864a-c5e8a0d53999\") " pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.092067 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.153017 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.153071 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.232239 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.239724 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.245132 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.342400 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.342578 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.342616 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.443827 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.443874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.443915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.444639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.444940 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.467573 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"certified-operators-qsqhq\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:27 crc kubenswrapper[4949]: I0120 15:05:27.562345 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.359986 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.361263 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.367991 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7lpvk" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.368017 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.368132 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.369319 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.403954 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471656 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8cg\" (UniqueName: \"kubernetes.io/projected/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-kube-api-access-6f8cg\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471817 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-config\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 
15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471841 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471880 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.471905 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573442 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573504 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573587 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573620 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573648 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8cg\" (UniqueName: \"kubernetes.io/projected/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-kube-api-access-6f8cg\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573673 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573691 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-config\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.573714 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.574696 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.574881 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.575793 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.575827 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-config\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.580501 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.581639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.581775 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.594127 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8cg\" (UniqueName: \"kubernetes.io/projected/17c9cb64-1ff5-4087-b424-1c2bb7398ba0-kube-api-access-6f8cg\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.598934 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"17c9cb64-1ff5-4087-b424-1c2bb7398ba0\") " pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:28 crc kubenswrapper[4949]: I0120 15:05:28.716897 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.643680 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.644324 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dck96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-dwl6v_openstack(a79c257a-a3a3-4db1-8f46-a0a499808dbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.645463 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" podUID="a79c257a-a3a3-4db1-8f46-a0a499808dbf" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.676994 4949 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" podUID="a79c257a-a3a3-4db1-8f46-a0a499808dbf" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.702901 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.703075 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqgbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-w9f28_openstack(76c41597-7a3e-40c0-91d3-a73771874abe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:35 crc kubenswrapper[4949]: E0120 15:05:35.704270 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" Jan 20 15:05:36 crc kubenswrapper[4949]: E0120 15:05:36.681365 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.217386 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.217787 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbtn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-c7tfd_openstack(c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.219033 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" podUID="c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.271016 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.275256 
4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhxf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bvzqr_openstack(11aac808-7998-48bc-b54a-75b207b8a12b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:05:37 crc kubenswrapper[4949]: E0120 15:05:37.279104 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" podUID="11aac808-7998-48bc-b54a-75b207b8a12b" Jan 20 15:05:37 crc kubenswrapper[4949]: I0120 15:05:37.938499 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:05:37 crc kubenswrapper[4949]: I0120 15:05:37.944063 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2"] Jan 20 15:05:37 crc kubenswrapper[4949]: I0120 15:05:37.959315 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.055822 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.134416 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kbnxn"] Jan 20 15:05:38 crc kubenswrapper[4949]: W0120 15:05:38.168452 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe70405_ca2b_4d54_9b46_c798b4ff8583.slice/crio-eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7 WatchSource:0}: Error finding container eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7: Status 404 returned error can't find the container with id eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7 Jan 20 15:05:38 crc kubenswrapper[4949]: W0120 15:05:38.177228 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbce99786_819a_47cc_8ad7_0c5581f034fa.slice/crio-0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d WatchSource:0}: Error finding container 0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d: Status 404 returned error can't find the container with id 0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.228926 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.235891 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342387 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") pod \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342788 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") pod \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342842 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") pod \"11aac808-7998-48bc-b54a-75b207b8a12b\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342909 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") pod \"11aac808-7998-48bc-b54a-75b207b8a12b\" (UID: \"11aac808-7998-48bc-b54a-75b207b8a12b\") " Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.342995 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") pod \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\" (UID: \"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1\") " Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.343371 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config" (OuterVolumeSpecName: "config") pod "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" (UID: "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.343625 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.343637 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config" (OuterVolumeSpecName: "config") pod "11aac808-7998-48bc-b54a-75b207b8a12b" (UID: "11aac808-7998-48bc-b54a-75b207b8a12b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.344272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" (UID: "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.356151 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6" (OuterVolumeSpecName: "kube-api-access-hbtn6") pod "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" (UID: "c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1"). InnerVolumeSpecName "kube-api-access-hbtn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.356347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4" (OuterVolumeSpecName: "kube-api-access-mhxf4") pod "11aac808-7998-48bc-b54a-75b207b8a12b" (UID: "11aac808-7998-48bc-b54a-75b207b8a12b"). InnerVolumeSpecName "kube-api-access-mhxf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445805 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhxf4\" (UniqueName: \"kubernetes.io/projected/11aac808-7998-48bc-b54a-75b207b8a12b-kube-api-access-mhxf4\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445841 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11aac808-7998-48bc-b54a-75b207b8a12b-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445850 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.445859 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbtn6\" (UniqueName: \"kubernetes.io/projected/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1-kube-api-access-hbtn6\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.660125 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.707688 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerStarted","Data":"8994dbe5d18b13b8b627eb0e0b8e2db7be8fe96864e521cb6ae9251c5b7d8268"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.709939 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" event={"ID":"11aac808-7998-48bc-b54a-75b207b8a12b","Type":"ContainerDied","Data":"0e4baeea71ced942bdc3203d8a06e8b6e52347327c3fa43612565ec346c789be"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.710007 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bvzqr" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.712102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2" event={"ID":"c4179fca-4378-4347-a519-96120d9ae1cc","Type":"ContainerStarted","Data":"ee99f12c2d22a1480e7d18e7e8bb90000463389249ff63a5339628c3fbdaddeb"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.715230 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17c9cb64-1ff5-4087-b424-1c2bb7398ba0","Type":"ContainerStarted","Data":"e16506a9c1df886d9ba8a98349c8641329b795adc65c8bff9999d7fff4b787e3"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.718079 4949 generic.go:334] "Generic (PLEG): container finished" podID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerID="f03665b194c4174cebb25646bd720102812b8ac22b08bd892bcbfae2b602d925" exitCode=0 Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.718131 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"f03665b194c4174cebb25646bd720102812b8ac22b08bd892bcbfae2b602d925"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.718150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerStarted","Data":"68cb870c434ab55233ed72365d1bc78370679ae532604bfe367507e2c57caf3a"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.721148 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerStarted","Data":"0c6ab1decee349eeac21eebacdf030a4e2f976892468dae3bfc0b721c80ab10d"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.725165 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerStarted","Data":"eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.734620 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.734679 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c7tfd" event={"ID":"c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1","Type":"ContainerDied","Data":"67b8f6843f33b9f6a96e9ac298202b8f0fcdb5407759da49848910b68ac660e3"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.740031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"485725f6-91f1-413b-89f5-21bde785bd94","Type":"ContainerStarted","Data":"a14d10d85d043b261760ba75f7325ee1eef372ddfaf1e2f43ad87e9041368654"} Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.751082 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.777638 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.980137437 podStartE2EDuration="19.777618199s" podCreationTimestamp="2026-01-20 15:05:19 +0000 UTC" firstStartedPulling="2026-01-20 15:05:20.445789749 +0000 UTC m=+916.255620607" lastFinishedPulling="2026-01-20 15:05:37.243270511 +0000 UTC m=+933.053101369" observedRunningTime="2026-01-20 15:05:38.773336872 +0000 UTC m=+934.583167730" watchObservedRunningTime="2026-01-20 15:05:38.777618199 +0000 UTC m=+934.587449067" Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.826223 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.827152 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bvzqr"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.843847 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:38 crc kubenswrapper[4949]: I0120 15:05:38.849207 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c7tfd"] Jan 20 15:05:39 crc kubenswrapper[4949]: W0120 15:05:39.126986 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab38c923_ec3b_400d_864a_c5e8a0d53999.slice/crio-e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc WatchSource:0}: Error finding container e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc: Status 404 returned error can't find the container with id e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.749160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerStarted","Data":"0e51d1dab36c13e13a0a4d54e015bfc45c6c88e1be2b3744ce7393168375f2b7"} Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.751814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerStarted","Data":"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"} Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.754594 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"ab38c923-ec3b-400d-864a-c5e8a0d53999","Type":"ContainerStarted","Data":"e26a41da0f055cebb48a67cc90b8938e2b8ca0bb75248cf1927be618cd1d71bc"} Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.756229 4949 generic.go:334] "Generic (PLEG): container finished" podID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerID="7116bccdb347550321602be8ab7c8a5038e543ed30d76d1e6cf7ae23a1c0748e" exitCode=0 Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.756289 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"7116bccdb347550321602be8ab7c8a5038e543ed30d76d1e6cf7ae23a1c0748e"} Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.759319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerStarted","Data":"62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe"} Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.759497 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 20 15:05:39 crc kubenswrapper[4949]: I0120 15:05:39.839599 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.949972483 podStartE2EDuration="18.839578711s" podCreationTimestamp="2026-01-20 15:05:21 +0000 UTC" firstStartedPulling="2026-01-20 15:05:25.299728678 +0000 UTC m=+921.109559526" lastFinishedPulling="2026-01-20 15:05:39.189334886 +0000 UTC m=+934.999165754" observedRunningTime="2026-01-20 15:05:39.838185547 +0000 UTC m=+935.648016405" watchObservedRunningTime="2026-01-20 15:05:39.839578711 +0000 UTC m=+935.649409569" Jan 20 15:05:40 crc kubenswrapper[4949]: I0120 15:05:40.768544 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerStarted","Data":"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c"} Jan 20 15:05:40 crc kubenswrapper[4949]: I0120 15:05:40.805706 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11aac808-7998-48bc-b54a-75b207b8a12b" path="/var/lib/kubelet/pods/11aac808-7998-48bc-b54a-75b207b8a12b/volumes" Jan 20 15:05:40 crc kubenswrapper[4949]: I0120 15:05:40.806103 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1" path="/var/lib/kubelet/pods/c9929c1a-9656-4f9b-b7e0-b86b7e1f5ce1/volumes" Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.785195 4949 generic.go:334] "Generic (PLEG): container finished" podID="ee020527-9591-42dc-b000-3153caede9cf" containerID="8994dbe5d18b13b8b627eb0e0b8e2db7be8fe96864e521cb6ae9251c5b7d8268" exitCode=0 Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.785866 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerDied","Data":"8994dbe5d18b13b8b627eb0e0b8e2db7be8fe96864e521cb6ae9251c5b7d8268"} Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.798908 4949 generic.go:334] "Generic (PLEG): container finished" podID="f03e93a7-24b6-499c-89bc-1bf3e67221a6" containerID="0e51d1dab36c13e13a0a4d54e015bfc45c6c88e1be2b3744ce7393168375f2b7" exitCode=0 Jan 20 15:05:42 crc kubenswrapper[4949]: I0120 15:05:42.805817 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerDied","Data":"0e51d1dab36c13e13a0a4d54e015bfc45c6c88e1be2b3744ce7393168375f2b7"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.807617 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17c9cb64-1ff5-4087-b424-1c2bb7398ba0","Type":"ContainerStarted","Data":"aa8606627e14db1a2d93aeccc484e1113035ddc4843f25b4eaa277d98ca9bdf6"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.809670 4949 generic.go:334] "Generic (PLEG): container finished" podID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerID="97885d51078adaf7b0201e67e5028b4306ed2924b2eb0990ba98b4acc792105a" exitCode=0 Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.809735 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"97885d51078adaf7b0201e67e5028b4306ed2924b2eb0990ba98b4acc792105a"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.811552 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab38c923-ec3b-400d-864a-c5e8a0d53999","Type":"ContainerStarted","Data":"4842decb917138d43aa39e41244a9b936cfb8a7c419bd4b10603536aea18dd88"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.823171 4949 generic.go:334] "Generic (PLEG): container finished" podID="bce99786-819a-47cc-8ad7-0c5581f034fa" containerID="d698d42622c6944052684003b7edbe49f368043aacc69933df75aba421a7adfc" exitCode=0 Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.823253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerDied","Data":"d698d42622c6944052684003b7edbe49f368043aacc69933df75aba421a7adfc"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.829317 4949 generic.go:334] "Generic (PLEG): container finished" podID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerID="99255b61b7f0088c38e333002cd268cb5398ee7d7f296126fdae25ebda59cb81" exitCode=0 Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.829404 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"99255b61b7f0088c38e333002cd268cb5398ee7d7f296126fdae25ebda59cb81"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.832883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee020527-9591-42dc-b000-3153caede9cf","Type":"ContainerStarted","Data":"7fd378f99940f01fbe8656237eac066146a9e0e0410c6b59b1fe0f0d8d2f10c9"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.835621 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f03e93a7-24b6-499c-89bc-1bf3e67221a6","Type":"ContainerStarted","Data":"d0e706391c1e92bb8858dd4b366b220476fd009e5378badc350054a7e6da12eb"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.847787 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2" event={"ID":"c4179fca-4378-4347-a519-96120d9ae1cc","Type":"ContainerStarted","Data":"813d117263ac666d2eb775e981e7bd4c19e098da4daf5c8f06c310935ae71d0f"} Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.847938 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-nqhh2" Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.917823 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nqhh2" podStartSLOduration=14.239679705 podStartE2EDuration="18.917794122s" podCreationTimestamp="2026-01-20 15:05:25 +0000 UTC" firstStartedPulling="2026-01-20 15:05:38.005711384 +0000 UTC m=+933.815542242" lastFinishedPulling="2026-01-20 15:05:42.683825781 +0000 UTC m=+938.493656659" observedRunningTime="2026-01-20 15:05:43.904077261 +0000 UTC m=+939.713908129" watchObservedRunningTime="2026-01-20 15:05:43.917794122 +0000 UTC m=+939.727624990" Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.953341 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.485825911 podStartE2EDuration="26.953313354s" podCreationTimestamp="2026-01-20 15:05:17 +0000 UTC" firstStartedPulling="2026-01-20 15:05:20.076206487 +0000 UTC m=+915.886037345" lastFinishedPulling="2026-01-20 15:05:37.54369393 +0000 UTC m=+933.353524788" observedRunningTime="2026-01-20 15:05:43.92458536 +0000 UTC m=+939.734416218" watchObservedRunningTime="2026-01-20 15:05:43.953313354 +0000 UTC m=+939.763144212" Jan 20 15:05:43 crc kubenswrapper[4949]: I0120 15:05:43.955350 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.371276129 podStartE2EDuration="25.955338599s" podCreationTimestamp="2026-01-20 15:05:18 +0000 UTC" firstStartedPulling="2026-01-20 15:05:20.383785817 +0000 UTC m=+916.193616675" lastFinishedPulling="2026-01-20 15:05:37.967848287 +0000 UTC m=+933.777679145" observedRunningTime="2026-01-20 15:05:43.951228386 +0000 UTC m=+939.761059264" watchObservedRunningTime="2026-01-20 15:05:43.955338599 +0000 UTC m=+939.765169447" Jan 20 15:05:44 crc kubenswrapper[4949]: I0120 15:05:44.861258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerStarted","Data":"52d510ee9c74ef34881edcdb2e4eb447b39fa992cb5ffc9b736b70708e128356"} Jan 20 15:05:45 crc kubenswrapper[4949]: I0120 15:05:45.129957 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.887441 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"17c9cb64-1ff5-4087-b424-1c2bb7398ba0","Type":"ContainerStarted","Data":"ef4a80dffe44bab4fa4b5cf30ae59a0e9bf0ef7a10071754727422ce9bca13be"} Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.889395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerStarted","Data":"a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3"} Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.890983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab38c923-ec3b-400d-864a-c5e8a0d53999","Type":"ContainerStarted","Data":"8c550f8f9f911c2961719877c6aef13b282cbc03642823ffe9e74bd0bde55ee5"} Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.893698 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kbnxn" 
event={"ID":"bce99786-819a-47cc-8ad7-0c5581f034fa","Type":"ContainerStarted","Data":"b09db70819b05ffad6cf612985843206060f76b4eeb837540af6df28a5ab5c8b"} Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.893798 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.893903 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.895778 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerStarted","Data":"cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624"} Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.919311 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.501357586 podStartE2EDuration="20.919295596s" podCreationTimestamp="2026-01-20 15:05:27 +0000 UTC" firstStartedPulling="2026-01-20 15:05:38.18415031 +0000 UTC m=+933.993981168" lastFinishedPulling="2026-01-20 15:05:46.60208831 +0000 UTC m=+942.411919178" observedRunningTime="2026-01-20 15:05:47.913724147 +0000 UTC m=+943.723555005" watchObservedRunningTime="2026-01-20 15:05:47.919295596 +0000 UTC m=+943.729126454" Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.932089 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.451988898 podStartE2EDuration="22.932065416s" podCreationTimestamp="2026-01-20 15:05:25 +0000 UTC" firstStartedPulling="2026-01-20 15:05:39.134135411 +0000 UTC m=+934.943966269" lastFinishedPulling="2026-01-20 15:05:46.614211929 +0000 UTC m=+942.424042787" observedRunningTime="2026-01-20 15:05:47.928954927 +0000 UTC m=+943.738785785" watchObservedRunningTime="2026-01-20 15:05:47.932065416 +0000 UTC m=+943.741896264" Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.950609 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsqhq" podStartSLOduration=14.05141923 podStartE2EDuration="20.950584602s" podCreationTimestamp="2026-01-20 15:05:27 +0000 UTC" firstStartedPulling="2026-01-20 15:05:39.758764973 +0000 UTC m=+935.568595831" lastFinishedPulling="2026-01-20 15:05:46.657930345 +0000 UTC m=+942.467761203" observedRunningTime="2026-01-20 15:05:47.946837322 +0000 UTC m=+943.756668180" watchObservedRunningTime="2026-01-20 15:05:47.950584602 +0000 UTC m=+943.760415460" Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.966959 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-825w7" podStartSLOduration=16.424811965 podStartE2EDuration="23.966937138s" podCreationTimestamp="2026-01-20 15:05:24 +0000 UTC" firstStartedPulling="2026-01-20 15:05:39.120867995 +0000 UTC m=+934.930698853" lastFinishedPulling="2026-01-20 15:05:46.662993168 +0000 UTC m=+942.472824026" observedRunningTime="2026-01-20 15:05:47.961369629 +0000 UTC m=+943.771200487" watchObservedRunningTime="2026-01-20 15:05:47.966937138 +0000 UTC m=+943.776767996" Jan 20 15:05:47 crc kubenswrapper[4949]: I0120 15:05:47.984700 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kbnxn" podStartSLOduration=18.512804911 podStartE2EDuration="22.984685289s" 
podCreationTimestamp="2026-01-20 15:05:25 +0000 UTC" firstStartedPulling="2026-01-20 15:05:38.184225412 +0000 UTC m=+933.994056270" lastFinishedPulling="2026-01-20 15:05:42.6561058 +0000 UTC m=+938.465936648" observedRunningTime="2026-01-20 15:05:47.981243928 +0000 UTC m=+943.791074786" watchObservedRunningTime="2026-01-20 15:05:47.984685289 +0000 UTC m=+943.794516147" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.093183 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.137681 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.468888 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.469265 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.539766 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.718107 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.907956 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:48 crc kubenswrapper[4949]: I0120 15:05:48.962904 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.013686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.170002 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.207084 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.213998 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.216020 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.226863 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234025 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234086 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234159 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.234196 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.304560 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q26vt"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.306287 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.329882 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q26vt"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.330042 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338134 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338211 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-combined-ca-bundle\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovs-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338314 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k78pt\" (UniqueName: \"kubernetes.io/projected/f4968375-00d3-4db1-93b4-db0808c464b2-kube-api-access-k78pt\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338388 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338408 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: 
\"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338429 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovn-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.338450 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4968375-00d3-4db1-93b4-db0808c464b2-config\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.339272 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.339825 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.341082 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.383221 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"dnsmasq-dns-5bf47b49b7-p8gng\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-combined-ca-bundle\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456614 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovs-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456658 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " 
pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456685 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k78pt\" (UniqueName: \"kubernetes.io/projected/f4968375-00d3-4db1-93b4-db0808c464b2-kube-api-access-k78pt\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456755 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovn-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.456786 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4968375-00d3-4db1-93b4-db0808c464b2-config\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.457808 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4968375-00d3-4db1-93b4-db0808c464b2-config\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.458012 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovs-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.458166 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f4968375-00d3-4db1-93b4-db0808c464b2-ovn-rundir\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.467157 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.467380 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4968375-00d3-4db1-93b4-db0808c464b2-combined-ca-bundle\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.475579 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k78pt\" (UniqueName: \"kubernetes.io/projected/f4968375-00d3-4db1-93b4-db0808c464b2-kube-api-access-k78pt\") pod \"ovn-controller-metrics-q26vt\" (UID: \"f4968375-00d3-4db1-93b4-db0808c464b2\") " pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.552444 
4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.612156 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.644365 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.644963 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-q26vt" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.646082 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.649996 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.657798 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.659704 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.722829 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.752342 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.753359 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.756132 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765626 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") pod \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") pod \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765717 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") pod \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\" (UID: \"a79c257a-a3a3-4db1-8f46-a0a499808dbf\") " Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.765903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766009 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766057 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766109 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766129 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.766862 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config" (OuterVolumeSpecName: 
"config") pod "a79c257a-a3a3-4db1-8f46-a0a499808dbf" (UID: "a79c257a-a3a3-4db1-8f46-a0a499808dbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.767575 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a79c257a-a3a3-4db1-8f46-a0a499808dbf" (UID: "a79c257a-a3a3-4db1-8f46-a0a499808dbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.778621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96" (OuterVolumeSpecName: "kube-api-access-dck96") pod "a79c257a-a3a3-4db1-8f46-a0a499808dbf" (UID: "a79c257a-a3a3-4db1-8f46-a0a499808dbf"). InnerVolumeSpecName "kube-api-access-dck96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.796927 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.797232 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.798122 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.812281 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.813977 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.816190 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.823227 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868271 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868350 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868400 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868495 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868524 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868577 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") 
" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.868781 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869724 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869785 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.869993 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.870009 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79c257a-a3a3-4db1-8f46-a0a499808dbf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.870020 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dck96\" (UniqueName: \"kubernetes.io/projected/a79c257a-a3a3-4db1-8f46-a0a499808dbf-kube-api-access-dck96\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.886125 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.923661 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" event={"ID":"a79c257a-a3a3-4db1-8f46-a0a499808dbf","Type":"ContainerDied","Data":"d101eef9e73f679d6e83da351b32371512e88f55aedbbee4bdb3b09d5a79f5d8"} Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.923927 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dwl6v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.941426 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.942603 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.951188 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.965171 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.980751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981208 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981309 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981628 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.981883 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.983193 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.983782 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: 
\"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:49 crc kubenswrapper[4949]: I0120 15:05:49.997400 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"keystone-31fc-account-create-update-cvjjl\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.014752 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.023121 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"keystone-db-create-ctk5g\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.046580 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dwl6v"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.064560 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.066142 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.069899 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.073103 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.079164 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.083929 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.083971 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.085258 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.114370 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"placement-db-create-zr22v\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.116744 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.143947 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.188748 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.188807 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.199774 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.200969 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.204735 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.205019 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.205271 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7zw9t" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.208146 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.211476 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.269167 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q26vt"] Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.269561 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.289995 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290038 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290102 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290143 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-config\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmp48\" (UniqueName: \"kubernetes.io/projected/425d9be8-fa72-4cbe-bcc7-444e46e67a08-kube-api-access-tmp48\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290391 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-scripts\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 
15:05:50.290464 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290552 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290592 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.290749 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:50 crc kubenswrapper[4949]: I0120 15:05:50.307438 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"placement-68d2-account-create-update-7xhv6\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.386894 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.391995 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392154 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392330 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-config\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmp48\" (UniqueName: \"kubernetes.io/projected/425d9be8-fa72-4cbe-bcc7-444e46e67a08-kube-api-access-tmp48\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392405 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-scripts\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.392769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.393246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-config\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.394163 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/425d9be8-fa72-4cbe-bcc7-444e46e67a08-scripts\") pod \"ovn-northd-0\" (UID: 
\"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.395896 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.396102 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.396581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d9be8-fa72-4cbe-bcc7-444e46e67a08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.411950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmp48\" (UniqueName: \"kubernetes.io/projected/425d9be8-fa72-4cbe-bcc7-444e46e67a08-kube-api-access-tmp48\") pod \"ovn-northd-0\" (UID: \"425d9be8-fa72-4cbe-bcc7-444e46e67a08\") " pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.529721 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.801303 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79c257a-a3a3-4db1-8f46-a0a499808dbf" path="/var/lib/kubelet/pods/a79c257a-a3a3-4db1-8f46-a0a499808dbf/volumes" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.831365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-2vttb\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.886395 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.946572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q26vt" event={"ID":"f4968375-00d3-4db1-93b4-db0808c464b2","Type":"ContainerStarted","Data":"a263ee384fa36c603a203a13cd37f1a2106615328c59785eea8e98eb32a02baf"} Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:50.948943 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerStarted","Data":"6996c0b6103b18456eb99c9a9d46337d5c6171dee7a722eba0c900e6409fff97"} Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:51.974738 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.190576 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.191097 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.254036 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.334546 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.335438 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.347253 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.368262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.368435 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.439681 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.440696 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.443483 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.450761 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470354 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470431 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470476 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.470575 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.471339 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.488661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"glance-db-create-x9bkl\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.572361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.572711 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod 
\"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.573255 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.595244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod \"glance-6dd3-account-create-update-k72x5\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.656526 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:05:55 crc kubenswrapper[4949]: I0120 15:05:55.755668 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.047035 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.141153 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.592720 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.612281 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.636442 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.642381 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.657443 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.664578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.670848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:05:56 crc kubenswrapper[4949]: I0120 15:05:56.677584 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.122794 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.124713 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.126760 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.133155 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152025 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152386 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152432 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.152961 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.153066 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf" gracePeriod=600 Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.222848 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.222983 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.324116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.324261 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.325375 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.344115 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"root-account-create-update-p27hz\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.450123 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.562578 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.562659 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:57 crc kubenswrapper[4949]: I0120 15:05:57.613041 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:58 crc kubenswrapper[4949]: I0120 15:05:58.007462 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-825w7" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" containerID="cri-o://a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3" gracePeriod=2 Jan 20 15:05:58 crc kubenswrapper[4949]: I0120 15:05:58.053631 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:05:58 crc kubenswrapper[4949]: I0120 15:05:58.447682 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.006490 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod625a0372_8b33_45fa_ad97_ad8e362be0fb.slice/crio-a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18 WatchSource:0}: Error finding container a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18: Status 404 returned error can't find the container with id a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18 Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.007678 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod425d9be8_fa72_4cbe_bcc7_444e46e67a08.slice/crio-e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02 WatchSource:0}: Error finding container e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02: 
Status 404 returned error can't find the container with id e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.015425 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q26vt" event={"ID":"f4968375-00d3-4db1-93b4-db0808c464b2","Type":"ContainerStarted","Data":"2a76f84e4e9d777e8e9e611fb0e6cc406b27b36ffe1f3a02dbd8fd19ffa65008"} Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.017351 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2993cec_87be_40ef_8f45_51ad7072f115.slice/crio-69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629 WatchSource:0}: Error finding container 69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629: Status 404 returned error can't find the container with id 69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.018360 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerStarted","Data":"a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18"} Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.022370 4949 generic.go:334] "Generic (PLEG): container finished" podID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerID="a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3" exitCode=0 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.022430 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3"} Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.024432 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf" exitCode=0 Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.025187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf"} Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.025218 4949 scope.go:117] "RemoveContainer" containerID="680d8732678521892e1f93d2934dba33b63ebd2fe03470cc1d56dd0bdca5de1c" Jan 20 15:05:59 crc kubenswrapper[4949]: W0120 15:05:59.040860 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa3acdd4_7817_4358_8afb_90399e3fa23f.slice/crio-e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d WatchSource:0}: Error finding container e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d: Status 404 returned error can't find the container with id e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.043610 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.070966 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q26vt" 
podStartSLOduration=10.070945321 podStartE2EDuration="10.070945321s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:05:59.041452743 +0000 UTC m=+954.851283641" watchObservedRunningTime="2026-01-20 15:05:59.070945321 +0000 UTC m=+954.880776189" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.188922 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.519412 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.566806 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") pod \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.566885 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") pod \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.567071 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") pod \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\" (UID: \"fec4e3eb-8e0c-4448-bd89-854714f2a98b\") " Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.568910 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities" (OuterVolumeSpecName: "utilities") pod "fec4e3eb-8e0c-4448-bd89-854714f2a98b" (UID: "fec4e3eb-8e0c-4448-bd89-854714f2a98b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.586283 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk" (OuterVolumeSpecName: "kube-api-access-p57xk") pod "fec4e3eb-8e0c-4448-bd89-854714f2a98b" (UID: "fec4e3eb-8e0c-4448-bd89-854714f2a98b"). InnerVolumeSpecName "kube-api-access-p57xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.652648 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fec4e3eb-8e0c-4448-bd89-854714f2a98b" (UID: "fec4e3eb-8e0c-4448-bd89-854714f2a98b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.653866 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.680298 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p57xk\" (UniqueName: \"kubernetes.io/projected/fec4e3eb-8e0c-4448-bd89-854714f2a98b-kube-api-access-p57xk\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.680324 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:05:59 crc kubenswrapper[4949]: I0120 15:05:59.680335 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4e3eb-8e0c-4448-bd89-854714f2a98b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.034643 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerStarted","Data":"dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.035017 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerStarted","Data":"69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.042280 4949 generic.go:334] "Generic (PLEG): container finished" podID="76c41597-7a3e-40c0-91d3-a73771874abe" containerID="0999acc591f05e388476f260970e6ec61337b2d0b65b72617ceb59d4faf4d31f" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.042339 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" event={"ID":"76c41597-7a3e-40c0-91d3-a73771874abe","Type":"ContainerDied","Data":"0999acc591f05e388476f260970e6ec61337b2d0b65b72617ceb59d4faf4d31f"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.044435 4949 generic.go:334] "Generic (PLEG): container finished" podID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerID="68276b2a29712da0c8b68150ac12b491bc8fd4c69ba0f9839e1490af457e18ac" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.044532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerDied","Data":"68276b2a29712da0c8b68150ac12b491bc8fd4c69ba0f9839e1490af457e18ac"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.044563 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerStarted","Data":"f9f5d1619d230fe16e03f871babb60f8165c69870d0389a062447e2bf198b69d"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.052535 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerStarted","Data":"164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.062779 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-31fc-account-create-update-cvjjl" podStartSLOduration=11.062757366 podStartE2EDuration="11.062757366s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.052185757 +0000 UTC m=+955.862016615" watchObservedRunningTime="2026-01-20 15:06:00.062757366 +0000 UTC m=+955.872588234" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.068160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerStarted","Data":"094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.068214 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerStarted","Data":"cdf521eae40e69f93d6255ba78fcd958a008dea83db5b21eb57ba5e7b4bb45ee"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.098417 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.145874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerStarted","Data":"75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.145920 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerStarted","Data":"1d78e596c088c5f26c8586bee94d254159d14ebd3299015b971a3417bb01e379"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.170873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerStarted","Data":"60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.170922 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerStarted","Data":"e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.173070 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerStarted","Data":"55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.173099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerStarted","Data":"1154256b44f2049cb5a2d456438d141ab6e6260d36590284bfd2b45c26eb8830"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.174764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"425d9be8-fa72-4cbe-bcc7-444e46e67a08","Type":"ContainerStarted","Data":"e128e46230bf48d78276034a37973e22d9cb0f525e1067fa0bd903c4452baf02"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.178424 4949 generic.go:334] "Generic (PLEG): container finished" podID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerID="cbdd939af999bcfa3e96fc5079b45623220702fd2cd27bb16bfa120f2fbdfe75" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.178487 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerDied","Data":"cbdd939af999bcfa3e96fc5079b45623220702fd2cd27bb16bfa120f2fbdfe75"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.182597 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-825w7" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.182597 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-825w7" event={"ID":"fec4e3eb-8e0c-4448-bd89-854714f2a98b","Type":"ContainerDied","Data":"68cb870c434ab55233ed72365d1bc78370679ae532604bfe367507e2c57caf3a"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.182649 4949 scope.go:117] "RemoveContainer" containerID="a2b8ec307f31fd4d86ed572c2d2e075dfc695659858ff08db3c5fd1c5540b5f3" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.185499 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-ctk5g" podStartSLOduration=11.185485023 podStartE2EDuration="11.185485023s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.177115314 +0000 UTC m=+955.986946172" watchObservedRunningTime="2026-01-20 15:06:00.185485023 +0000 UTC m=+955.995315881" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.186548 4949 generic.go:334] "Generic (PLEG): container finished" podID="2cffaea4-923f-446d-9df7-7c35332af89d" containerID="182fc5d23cfc8772155fb0ae18fcbb7d700abd47011cd0c4eae8e341dd49f364" exitCode=0 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.188813 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9bkl" event={"ID":"2cffaea4-923f-446d-9df7-7c35332af89d","Type":"ContainerDied","Data":"182fc5d23cfc8772155fb0ae18fcbb7d700abd47011cd0c4eae8e341dd49f364"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.188843 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9bkl" event={"ID":"2cffaea4-923f-446d-9df7-7c35332af89d","Type":"ContainerStarted","Data":"572ce3ba037c78fb1b94d25482070b137d8f2c493c27a0d02a4b8659b34f894c"} Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.189864 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qsqhq" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" containerID="cri-o://cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624" gracePeriod=2 Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.275128 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68d2-account-create-update-7xhv6" podStartSLOduration=10.275109474 podStartE2EDuration="10.275109474s" podCreationTimestamp="2026-01-20 
15:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.245631216 +0000 UTC m=+956.055462104" watchObservedRunningTime="2026-01-20 15:06:00.275109474 +0000 UTC m=+956.084940332" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.306860 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-p27hz" podStartSLOduration=3.306834933 podStartE2EDuration="3.306834933s" podCreationTimestamp="2026-01-20 15:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:00.266458146 +0000 UTC m=+956.076289014" watchObservedRunningTime="2026-01-20 15:06:00.306834933 +0000 UTC m=+956.116665801" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.693770 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.702636 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-825w7"] Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.797825 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" path="/var/lib/kubelet/pods/fec4e3eb-8e0c-4448-bd89-854714f2a98b/volumes" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.864357 4949 scope.go:117] "RemoveContainer" containerID="97885d51078adaf7b0201e67e5028b4306ed2924b2eb0990ba98b4acc792105a" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.865318 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.936472 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") pod \"76c41597-7a3e-40c0-91d3-a73771874abe\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.936740 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") pod \"76c41597-7a3e-40c0-91d3-a73771874abe\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.936895 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") pod \"76c41597-7a3e-40c0-91d3-a73771874abe\" (UID: \"76c41597-7a3e-40c0-91d3-a73771874abe\") " Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.942189 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz" (OuterVolumeSpecName: "kube-api-access-bqgbz") pod "76c41597-7a3e-40c0-91d3-a73771874abe" (UID: "76c41597-7a3e-40c0-91d3-a73771874abe"). InnerVolumeSpecName "kube-api-access-bqgbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.967638 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config" (OuterVolumeSpecName: "config") pod "76c41597-7a3e-40c0-91d3-a73771874abe" (UID: "76c41597-7a3e-40c0-91d3-a73771874abe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:00 crc kubenswrapper[4949]: I0120 15:06:00.967791 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76c41597-7a3e-40c0-91d3-a73771874abe" (UID: "76c41597-7a3e-40c0-91d3-a73771874abe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.039025 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.039048 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqgbz\" (UniqueName: \"kubernetes.io/projected/76c41597-7a3e-40c0-91d3-a73771874abe-kube-api-access-bqgbz\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.039060 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c41597-7a3e-40c0-91d3-a73771874abe-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.195265 4949 generic.go:334] "Generic (PLEG): container finished" podID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerID="cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.195328 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.197486 4949 generic.go:334] "Generic (PLEG): container finished" podID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerID="094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.197591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerDied","Data":"094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.199027 4949 generic.go:334] "Generic (PLEG): container finished" podID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" containerID="75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.199101 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerDied","Data":"75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.201068 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="5f223041-d962-43d8-81ad-0480ed09ff57" containerID="55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.201210 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerDied","Data":"55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.203322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerStarted","Data":"e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.203422 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.209000 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerStarted","Data":"498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.209680 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.212423 4949 generic.go:334] "Generic (PLEG): container finished" podID="e2993cec-87be-40ef-8f45-51ad7072f115" containerID="dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.212474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerDied","Data":"dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.213710 4949 generic.go:334] "Generic (PLEG): container finished" podID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerID="60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.213752 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerDied","Data":"60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.221506 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.221821 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-w9f28" event={"ID":"76c41597-7a3e-40c0-91d3-a73771874abe","Type":"ContainerDied","Data":"d950ba48da9a4474c682914b60ac656e4bbc027d6cabdaea671da5dd5ca13bbd"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.228052 4949 generic.go:334] "Generic (PLEG): container finished" podID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerID="164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b" exitCode=0 Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.228882 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerDied","Data":"164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b"} Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.268859 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" podStartSLOduration=4.066035379 podStartE2EDuration="12.268843852s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="2026-01-20 15:05:50.859966366 +0000 UTC m=+946.669797234" lastFinishedPulling="2026-01-20 15:05:59.062774859 +0000 UTC m=+954.872605707" observedRunningTime="2026-01-20 15:06:01.267996124 +0000 UTC m=+957.077826992" watchObservedRunningTime="2026-01-20 15:06:01.268843852 +0000 UTC m=+957.078674700" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.289923 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-2vttb" podStartSLOduration=12.289904869 podStartE2EDuration="12.289904869s" podCreationTimestamp="2026-01-20 15:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:01.289727483 +0000 UTC m=+957.099558341" watchObservedRunningTime="2026-01-20 15:06:01.289904869 +0000 UTC m=+957.099735727" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.293128 4949 scope.go:117] "RemoveContainer" containerID="f03665b194c4174cebb25646bd720102812b8ac22b08bd892bcbfae2b602d925" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.356679 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.380550 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-w9f28"] Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.433263 4949 scope.go:117] "RemoveContainer" containerID="0999acc591f05e388476f260970e6ec61337b2d0b65b72617ceb59d4faf4d31f" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.514320 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.543844 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669070 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") pod \"efe70405-ca2b-4d54-9b46-c798b4ff8583\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669142 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") pod \"efe70405-ca2b-4d54-9b46-c798b4ff8583\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669170 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") pod \"efe70405-ca2b-4d54-9b46-c798b4ff8583\" (UID: \"efe70405-ca2b-4d54-9b46-c798b4ff8583\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669283 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") pod \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.669304 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") pod \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\" (UID: \"81d427b9-3122-480c-8b2a-3862cdd2b3e2\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.670473 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81d427b9-3122-480c-8b2a-3862cdd2b3e2" (UID: "81d427b9-3122-480c-8b2a-3862cdd2b3e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.675315 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c" (OuterVolumeSpecName: "kube-api-access-b6b4c") pod "efe70405-ca2b-4d54-9b46-c798b4ff8583" (UID: "efe70405-ca2b-4d54-9b46-c798b4ff8583"). InnerVolumeSpecName "kube-api-access-b6b4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.688721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn" (OuterVolumeSpecName: "kube-api-access-nf5tn") pod "81d427b9-3122-480c-8b2a-3862cdd2b3e2" (UID: "81d427b9-3122-480c-8b2a-3862cdd2b3e2"). InnerVolumeSpecName "kube-api-access-nf5tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.699505 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities" (OuterVolumeSpecName: "utilities") pod "efe70405-ca2b-4d54-9b46-c798b4ff8583" (UID: "efe70405-ca2b-4d54-9b46-c798b4ff8583"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.701468 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.736685 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.746335 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efe70405-ca2b-4d54-9b46-c798b4ff8583" (UID: "efe70405-ca2b-4d54-9b46-c798b4ff8583"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779612 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf5tn\" (UniqueName: \"kubernetes.io/projected/81d427b9-3122-480c-8b2a-3862cdd2b3e2-kube-api-access-nf5tn\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779658 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81d427b9-3122-480c-8b2a-3862cdd2b3e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779671 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779684 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe70405-ca2b-4d54-9b46-c798b4ff8583-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.779697 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6b4c\" (UniqueName: \"kubernetes.io/projected/efe70405-ca2b-4d54-9b46-c798b4ff8583-kube-api-access-b6b4c\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.881271 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") pod \"fa3acdd4-7817-4358-8afb-90399e3fa23f\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.881333 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") pod \"2cffaea4-923f-446d-9df7-7c35332af89d\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.881371 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") pod \"2cffaea4-923f-446d-9df7-7c35332af89d\" (UID: \"2cffaea4-923f-446d-9df7-7c35332af89d\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.882584 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") pod \"fa3acdd4-7817-4358-8afb-90399e3fa23f\" (UID: \"fa3acdd4-7817-4358-8afb-90399e3fa23f\") " Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.882250 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cffaea4-923f-446d-9df7-7c35332af89d" (UID: "2cffaea4-923f-446d-9df7-7c35332af89d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.883830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa3acdd4-7817-4358-8afb-90399e3fa23f" (UID: "fa3acdd4-7817-4358-8afb-90399e3fa23f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.884574 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cffaea4-923f-446d-9df7-7c35332af89d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.884819 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3acdd4-7817-4358-8afb-90399e3fa23f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.886802 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82" (OuterVolumeSpecName: "kube-api-access-fqg82") pod "fa3acdd4-7817-4358-8afb-90399e3fa23f" (UID: "fa3acdd4-7817-4358-8afb-90399e3fa23f"). InnerVolumeSpecName "kube-api-access-fqg82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.886948 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf" (OuterVolumeSpecName: "kube-api-access-f9ghf") pod "2cffaea4-923f-446d-9df7-7c35332af89d" (UID: "2cffaea4-923f-446d-9df7-7c35332af89d"). InnerVolumeSpecName "kube-api-access-f9ghf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.987144 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqg82\" (UniqueName: \"kubernetes.io/projected/fa3acdd4-7817-4358-8afb-90399e3fa23f-kube-api-access-fqg82\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:01 crc kubenswrapper[4949]: I0120 15:06:01.987738 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9ghf\" (UniqueName: \"kubernetes.io/projected/2cffaea4-923f-446d-9df7-7c35332af89d-kube-api-access-f9ghf\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.243748 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6dd3-account-create-update-k72x5" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.243746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6dd3-account-create-update-k72x5" event={"ID":"fa3acdd4-7817-4358-8afb-90399e3fa23f","Type":"ContainerDied","Data":"e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.243885 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e717ca92af354521acfa08cee36225a8ffb108839c12d25df06b686ce9548d" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.246392 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"425d9be8-fa72-4cbe-bcc7-444e46e67a08","Type":"ContainerStarted","Data":"ce3eca8b2ae84d58cfa065da823ae3981882f17ec71ab5d111d3ef4c34b16dbd"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.246418 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"425d9be8-fa72-4cbe-bcc7-444e46e67a08","Type":"ContainerStarted","Data":"463c00c27cc46935ff8763ab4169da4d41be0787d739e82660de6f6fc8fb80f2"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.246451 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.256341 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsqhq" event={"ID":"efe70405-ca2b-4d54-9b46-c798b4ff8583","Type":"ContainerDied","Data":"eab9d0d27080760ff500bf58a4b3feb1f20739c4c3f1a1ace1a9ee6555a301e7"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.256601 4949 scope.go:117] "RemoveContainer" containerID="cbd2ce552b2a0f381677696e8613a9364599ed9091b484b77dbfcdafea3cf624" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.256624 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsqhq" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.264170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zr22v" event={"ID":"81d427b9-3122-480c-8b2a-3862cdd2b3e2","Type":"ContainerDied","Data":"cdf521eae40e69f93d6255ba78fcd958a008dea83db5b21eb57ba5e7b4bb45ee"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.264235 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdf521eae40e69f93d6255ba78fcd958a008dea83db5b21eb57ba5e7b4bb45ee" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.264384 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zr22v" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.277616 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-x9bkl" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.277972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-x9bkl" event={"ID":"2cffaea4-923f-446d-9df7-7c35332af89d","Type":"ContainerDied","Data":"572ce3ba037c78fb1b94d25482070b137d8f2c493c27a0d02a4b8659b34f894c"} Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.279066 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="572ce3ba037c78fb1b94d25482070b137d8f2c493c27a0d02a4b8659b34f894c" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.280369 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=10.008662808 podStartE2EDuration="12.28035583s" podCreationTimestamp="2026-01-20 15:05:50 +0000 UTC" firstStartedPulling="2026-01-20 15:05:59.025460409 +0000 UTC m=+954.835291267" lastFinishedPulling="2026-01-20 15:06:01.297153431 +0000 UTC m=+957.106984289" observedRunningTime="2026-01-20 15:06:02.273988046 +0000 UTC m=+958.083818904" watchObservedRunningTime="2026-01-20 15:06:02.28035583 +0000 UTC m=+958.090186688" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.295730 4949 scope.go:117] "RemoveContainer" containerID="99255b61b7f0088c38e333002cd268cb5398ee7d7f296126fdae25ebda59cb81" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.301627 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.313534 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qsqhq"] Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.334859 4949 scope.go:117] "RemoveContainer" containerID="7116bccdb347550321602be8ab7c8a5038e543ed30d76d1e6cf7ae23a1c0748e" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.651934 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.731578 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.763469 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.768381 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.803746 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" path="/var/lib/kubelet/pods/76c41597-7a3e-40c0-91d3-a73771874abe/volumes" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.804246 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" path="/var/lib/kubelet/pods/efe70405-ca2b-4d54-9b46-c798b4ff8583/volumes" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.809697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") pod \"5f223041-d962-43d8-81ad-0480ed09ff57\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.809850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") pod \"5f223041-d962-43d8-81ad-0480ed09ff57\" (UID: \"5f223041-d962-43d8-81ad-0480ed09ff57\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.810766 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f223041-d962-43d8-81ad-0480ed09ff57" (UID: "5f223041-d962-43d8-81ad-0480ed09ff57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.815616 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2" (OuterVolumeSpecName: "kube-api-access-rjcm2") pod "5f223041-d962-43d8-81ad-0480ed09ff57" (UID: "5f223041-d962-43d8-81ad-0480ed09ff57"). InnerVolumeSpecName "kube-api-access-rjcm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910769 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") pod \"e2993cec-87be-40ef-8f45-51ad7072f115\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910865 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") pod \"625a0372-8b33-45fa-ad97-ad8e362be0fb\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910921 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") pod \"625a0372-8b33-45fa-ad97-ad8e362be0fb\" (UID: \"625a0372-8b33-45fa-ad97-ad8e362be0fb\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.910992 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") pod \"e2993cec-87be-40ef-8f45-51ad7072f115\" (UID: \"e2993cec-87be-40ef-8f45-51ad7072f115\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") pod \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911146 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") pod \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\" (UID: \"9d401b2e-722b-48cc-b8c4-19ffed9f43b8\") " Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911574 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjcm2\" (UniqueName: \"kubernetes.io/projected/5f223041-d962-43d8-81ad-0480ed09ff57-kube-api-access-rjcm2\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911599 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f223041-d962-43d8-81ad-0480ed09ff57-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911628 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2993cec-87be-40ef-8f45-51ad7072f115" (UID: "e2993cec-87be-40ef-8f45-51ad7072f115"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.911646 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "625a0372-8b33-45fa-ad97-ad8e362be0fb" (UID: "625a0372-8b33-45fa-ad97-ad8e362be0fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.912306 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d401b2e-722b-48cc-b8c4-19ffed9f43b8" (UID: "9d401b2e-722b-48cc-b8c4-19ffed9f43b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.915096 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c" (OuterVolumeSpecName: "kube-api-access-pzh6c") pod "e2993cec-87be-40ef-8f45-51ad7072f115" (UID: "e2993cec-87be-40ef-8f45-51ad7072f115"). InnerVolumeSpecName "kube-api-access-pzh6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.915110 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l" (OuterVolumeSpecName: "kube-api-access-mjw8l") pod "625a0372-8b33-45fa-ad97-ad8e362be0fb" (UID: "625a0372-8b33-45fa-ad97-ad8e362be0fb"). InnerVolumeSpecName "kube-api-access-mjw8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:02 crc kubenswrapper[4949]: I0120 15:06:02.915137 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr" (OuterVolumeSpecName: "kube-api-access-z9jxr") pod "9d401b2e-722b-48cc-b8c4-19ffed9f43b8" (UID: "9d401b2e-722b-48cc-b8c4-19ffed9f43b8"). InnerVolumeSpecName "kube-api-access-z9jxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012913 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2993cec-87be-40ef-8f45-51ad7072f115-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012944 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9jxr\" (UniqueName: \"kubernetes.io/projected/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-kube-api-access-z9jxr\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012955 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d401b2e-722b-48cc-b8c4-19ffed9f43b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012966 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzh6c\" (UniqueName: \"kubernetes.io/projected/e2993cec-87be-40ef-8f45-51ad7072f115-kube-api-access-pzh6c\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012977 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjw8l\" (UniqueName: \"kubernetes.io/projected/625a0372-8b33-45fa-ad97-ad8e362be0fb-kube-api-access-mjw8l\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.012987 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/625a0372-8b33-45fa-ad97-ad8e362be0fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.286931 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ctk5g" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.286925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ctk5g" event={"ID":"625a0372-8b33-45fa-ad97-ad8e362be0fb","Type":"ContainerDied","Data":"a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.287425 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a298ae875c8db41bd8657fef67b7cdb46985d4125ad8b38e71d613a460475d18" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.290151 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p27hz" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.290136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p27hz" event={"ID":"9d401b2e-722b-48cc-b8c4-19ffed9f43b8","Type":"ContainerDied","Data":"1d78e596c088c5f26c8586bee94d254159d14ebd3299015b971a3417bb01e379"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.290247 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d78e596c088c5f26c8586bee94d254159d14ebd3299015b971a3417bb01e379" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.291488 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31fc-account-create-update-cvjjl" event={"ID":"e2993cec-87be-40ef-8f45-51ad7072f115","Type":"ContainerDied","Data":"69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.291547 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f25246c98b2c30efd0c5de2aba612b99439ac449efeb635ae5bfcc6c208629" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.291588 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-31fc-account-create-update-cvjjl" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.296055 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68d2-account-create-update-7xhv6" event={"ID":"5f223041-d962-43d8-81ad-0480ed09ff57","Type":"ContainerDied","Data":"1154256b44f2049cb5a2d456438d141ab6e6260d36590284bfd2b45c26eb8830"} Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.296090 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1154256b44f2049cb5a2d456438d141ab6e6260d36590284bfd2b45c26eb8830" Jan 20 15:06:03 crc kubenswrapper[4949]: I0120 15:06:03.296120 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68d2-account-create-update-7xhv6" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.613463 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614388 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614407 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614429 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614436 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614449 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614457 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614466 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614473 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614484 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614492 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614507 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614534 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-content" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614546 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614552 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614561 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614568 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc 
kubenswrapper[4949]: E0120 15:06:05.614585 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614591 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614602 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614609 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614619 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" containerName="init" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614627 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" containerName="init" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614641 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614649 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="extract-utilities" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614664 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614671 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: E0120 15:06:05.614684 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614691 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614888 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec4e3eb-8e0c-4448-bd89-854714f2a98b" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614905 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614920 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe70405-ca2b-4d54-9b46-c798b4ff8583" containerName="registry-server" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614927 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614941 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614950 4949 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614965 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614976 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c41597-7a3e-40c0-91d3-a73771874abe" containerName="init" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614987 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" containerName="mariadb-database-create" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.614997 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" containerName="mariadb-account-create-update" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.615653 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.618741 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.619557 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-csksn" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.627155 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.759613 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.759674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.759729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.759990 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " 
pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861935 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861959 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.861983 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.868766 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.869247 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.869430 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.884423 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"glance-db-sync-48l6g\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") " pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.889720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.933286 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-48l6g" Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.948190 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:06:05 crc kubenswrapper[4949]: I0120 15:06:05.948481 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" containerID="cri-o://498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686" gracePeriod=10 Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.325837 4949 generic.go:334] "Generic (PLEG): container finished" podID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerID="498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686" exitCode=0 Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.325928 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerDied","Data":"498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686"} Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.533822 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584232 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584344 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584386 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.584457 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") pod \"23edc910-bec7-4375-a48e-69abb1c9c3f2\" (UID: \"23edc910-bec7-4375-a48e-69abb1c9c3f2\") " Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.591802 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr" (OuterVolumeSpecName: "kube-api-access-2vnmr") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "kube-api-access-2vnmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.603305 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:06:06 crc kubenswrapper[4949]: W0120 15:06:06.613136 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc607bb7c_569c_4da2_b6bf_5b6c9b5c041e.slice/crio-1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae WatchSource:0}: Error finding container 1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae: Status 404 returned error can't find the container with id 1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.633366 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.634620 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.656211 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config" (OuterVolumeSpecName: "config") pod "23edc910-bec7-4375-a48e-69abb1c9c3f2" (UID: "23edc910-bec7-4375-a48e-69abb1c9c3f2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685897 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685928 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685936 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23edc910-bec7-4375-a48e-69abb1c9c3f2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:06 crc kubenswrapper[4949]: I0120 15:06:06.685947 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vnmr\" (UniqueName: \"kubernetes.io/projected/23edc910-bec7-4375-a48e-69abb1c9c3f2-kube-api-access-2vnmr\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.332797 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerStarted","Data":"1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae"} Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.334972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" event={"ID":"23edc910-bec7-4375-a48e-69abb1c9c3f2","Type":"ContainerDied","Data":"6996c0b6103b18456eb99c9a9d46337d5c6171dee7a722eba0c900e6409fff97"} Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.335023 4949 scope.go:117] "RemoveContainer" containerID="498b381a428ae77e290c68955732d6e1196bbe8b0871bdace4fa8bac83d35686" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.335169 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p8gng" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.355433 4949 scope.go:117] "RemoveContainer" containerID="cbdd939af999bcfa3e96fc5079b45623220702fd2cd27bb16bfa120f2fbdfe75" Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.362358 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:06:07 crc kubenswrapper[4949]: I0120 15:06:07.369047 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p8gng"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.396264 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.402616 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p27hz"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422015 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:06:08 crc kubenswrapper[4949]: E0120 15:06:08.422382 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422401 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" Jan 20 15:06:08 crc kubenswrapper[4949]: E0120 15:06:08.422420 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="init" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422428 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="init" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.422636 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" containerName="dnsmasq-dns" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.423345 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.425629 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.428941 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.521116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.521262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.622752 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.622828 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.623761 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.652602 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"root-account-create-update-btxws\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.782890 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.801656 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23edc910-bec7-4375-a48e-69abb1c9c3f2" path="/var/lib/kubelet/pods/23edc910-bec7-4375-a48e-69abb1c9c3f2/volumes" Jan 20 15:06:08 crc kubenswrapper[4949]: I0120 15:06:08.802863 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d401b2e-722b-48cc-b8c4-19ffed9f43b8" path="/var/lib/kubelet/pods/9d401b2e-722b-48cc-b8c4-19ffed9f43b8/volumes" Jan 20 15:06:09 crc kubenswrapper[4949]: W0120 15:06:09.225315 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcafb93d7_a006_4cd2_99bd_e21022a5078f.slice/crio-9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80 WatchSource:0}: Error finding container 9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80: Status 404 returned error can't find the container with id 9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80 Jan 20 15:06:09 crc kubenswrapper[4949]: I0120 15:06:09.229488 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:06:09 crc kubenswrapper[4949]: I0120 15:06:09.352066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btxws" event={"ID":"cafb93d7-a006-4cd2-99bd-e21022a5078f","Type":"ContainerStarted","Data":"9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80"} Jan 20 15:06:10 crc kubenswrapper[4949]: I0120 15:06:10.361084 4949 generic.go:334] "Generic (PLEG): container finished" podID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerID="29003c0194acb9afdeb9e8174b3f33c4656b98673fb67369661844d652a26c45" exitCode=0 Jan 20 15:06:10 crc kubenswrapper[4949]: I0120 15:06:10.361169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btxws" event={"ID":"cafb93d7-a006-4cd2-99bd-e21022a5078f","Type":"ContainerDied","Data":"29003c0194acb9afdeb9e8174b3f33c4656b98673fb67369661844d652a26c45"} Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.370322 4949 generic.go:334] "Generic (PLEG): container finished" podID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131" exitCode=0 Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.370352 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerDied","Data":"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"} Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.671255 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.776455 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") pod \"cafb93d7-a006-4cd2-99bd-e21022a5078f\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.776625 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") pod \"cafb93d7-a006-4cd2-99bd-e21022a5078f\" (UID: \"cafb93d7-a006-4cd2-99bd-e21022a5078f\") " Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.777120 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cafb93d7-a006-4cd2-99bd-e21022a5078f" (UID: "cafb93d7-a006-4cd2-99bd-e21022a5078f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.780499 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr" (OuterVolumeSpecName: "kube-api-access-4lhdr") pod "cafb93d7-a006-4cd2-99bd-e21022a5078f" (UID: "cafb93d7-a006-4cd2-99bd-e21022a5078f"). InnerVolumeSpecName "kube-api-access-4lhdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.878016 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lhdr\" (UniqueName: \"kubernetes.io/projected/cafb93d7-a006-4cd2-99bd-e21022a5078f-kube-api-access-4lhdr\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:11 crc kubenswrapper[4949]: I0120 15:06:11.878054 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cafb93d7-a006-4cd2-99bd-e21022a5078f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.381993 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerStarted","Data":"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"} Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.382321 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.384350 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btxws" event={"ID":"cafb93d7-a006-4cd2-99bd-e21022a5078f","Type":"ContainerDied","Data":"9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80"} Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.384379 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a7d0abcf7b2963deb8a16f6f1edd7cbf87958dff1fdb953b2031741dd5b4a80" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.384428 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btxws" Jan 20 15:06:12 crc kubenswrapper[4949]: I0120 15:06:12.714235 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.251219672 podStartE2EDuration="57.714209128s" podCreationTimestamp="2026-01-20 15:05:15 +0000 UTC" firstStartedPulling="2026-01-20 15:05:17.694850459 +0000 UTC m=+913.504681317" lastFinishedPulling="2026-01-20 15:05:37.157839915 +0000 UTC m=+932.967670773" observedRunningTime="2026-01-20 15:06:12.412324743 +0000 UTC m=+968.222155591" watchObservedRunningTime="2026-01-20 15:06:12.714209128 +0000 UTC m=+968.524039996" Jan 20 15:06:13 crc kubenswrapper[4949]: I0120 15:06:13.397683 4949 generic.go:334] "Generic (PLEG): container finished" podID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" exitCode=0 Jan 20 15:06:13 crc kubenswrapper[4949]: I0120 15:06:13.397759 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerDied","Data":"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c"} Jan 20 15:06:15 crc kubenswrapper[4949]: I0120 15:06:15.586971 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.261195 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nqhh2" podUID="c4179fca-4378-4347-a519-96120d9ae1cc" containerName="ovn-controller" probeResult="failure" output=< Jan 20 15:06:16 crc kubenswrapper[4949]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 20 15:06:16 crc kubenswrapper[4949]: > Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.278189 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.295149 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kbnxn" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.495339 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:16 crc kubenswrapper[4949]: E0120 15:06:16.495630 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerName="mariadb-account-create-update" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.495646 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerName="mariadb-account-create-update" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.495821 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" containerName="mariadb-account-create-update" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.496300 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.499371 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.511043 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.664827 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.664913 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665115 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665365 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.665556 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767750 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") 
pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767788 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767816 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767840 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767919 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767925 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767864 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.767980 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.768678 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.770361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"ovn-controller-nqhh2-config-wp2s8\" 
(UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.789025 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"ovn-controller-nqhh2-config-wp2s8\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:16 crc kubenswrapper[4949]: I0120 15:06:16.823898 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.153962 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:19 crc kubenswrapper[4949]: W0120 15:06:19.168408 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod430c67d6_52aa_4386_8403_7be27bbe3abf.slice/crio-dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996 WatchSource:0}: Error finding container dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996: Status 404 returned error can't find the container with id dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996 Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.444810 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerStarted","Data":"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281"} Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.446169 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.447626 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2-config-wp2s8" event={"ID":"430c67d6-52aa-4386-8403-7be27bbe3abf","Type":"ContainerStarted","Data":"dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996"} Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.470083 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.833478286 podStartE2EDuration="1m3.470066173s" podCreationTimestamp="2026-01-20 15:05:16 +0000 UTC" firstStartedPulling="2026-01-20 15:05:17.915433611 +0000 UTC m=+913.725264469" lastFinishedPulling="2026-01-20 15:05:37.552021498 +0000 UTC m=+933.361852356" observedRunningTime="2026-01-20 15:06:19.468029528 +0000 UTC m=+975.277860396" watchObservedRunningTime="2026-01-20 15:06:19.470066173 +0000 UTC m=+975.279897031" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.584147 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.585823 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.596383 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.731320 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.731406 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.731474 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.833266 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.833629 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.833695 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.834136 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.834167 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.864338 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"redhat-marketplace-fpdcn\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") " pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:19 crc kubenswrapper[4949]: I0120 15:06:19.904366 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.333277 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.456021 4949 generic.go:334] "Generic (PLEG): container finished" podID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerID="a88c0c9a85129d9d6ee8562e849b80140bdaffa17c443b17a4de9fabf84ee113" exitCode=0 Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.456335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2-config-wp2s8" event={"ID":"430c67d6-52aa-4386-8403-7be27bbe3abf","Type":"ContainerDied","Data":"a88c0c9a85129d9d6ee8562e849b80140bdaffa17c443b17a4de9fabf84ee113"} Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.457103 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerStarted","Data":"be1513768022a765d8528f5739925bf8c2f745a2c54942f090b5b47b0cc445fd"} Jan 20 15:06:20 crc kubenswrapper[4949]: I0120 15:06:20.458585 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerStarted","Data":"3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4"} Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.259708 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nqhh2" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.281338 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-48l6g" podStartSLOduration=3.956552293 podStartE2EDuration="16.281315073s" podCreationTimestamp="2026-01-20 15:06:05 +0000 UTC" firstStartedPulling="2026-01-20 15:06:06.616428681 +0000 UTC m=+962.426259539" lastFinishedPulling="2026-01-20 15:06:18.941191461 +0000 UTC m=+974.751022319" observedRunningTime="2026-01-20 15:06:20.515681019 +0000 UTC m=+976.325511897" watchObservedRunningTime="2026-01-20 15:06:21.281315073 +0000 UTC m=+977.091145931" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.467345 4949 generic.go:334] "Generic (PLEG): container finished" podID="c22e6b14-e94a-4bb0-a034-60c355928551" containerID="91e2413b0353cd08abc8762744ba059b7a151348ee4608eb1c7a3ef0b3f6a658" exitCode=0 Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.467396 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"91e2413b0353cd08abc8762744ba059b7a151348ee4608eb1c7a3ef0b3f6a658"} Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.759650 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.861801 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862094 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862148 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862216 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862458 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862486 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862547 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862508 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run" (OuterVolumeSpecName: "var-run") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.862573 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") pod \"430c67d6-52aa-4386-8403-7be27bbe3abf\" (UID: \"430c67d6-52aa-4386-8403-7be27bbe3abf\") " Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863152 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863611 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts" (OuterVolumeSpecName: "scripts") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863712 4949 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863743 4949 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863762 4949 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.863779 4949 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/430c67d6-52aa-4386-8403-7be27bbe3abf-var-run\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.870660 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq" (OuterVolumeSpecName: "kube-api-access-dkgxq") pod "430c67d6-52aa-4386-8403-7be27bbe3abf" (UID: "430c67d6-52aa-4386-8403-7be27bbe3abf"). InnerVolumeSpecName "kube-api-access-dkgxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.965990 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkgxq\" (UniqueName: \"kubernetes.io/projected/430c67d6-52aa-4386-8403-7be27bbe3abf-kube-api-access-dkgxq\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:21 crc kubenswrapper[4949]: I0120 15:06:21.966049 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/430c67d6-52aa-4386-8403-7be27bbe3abf-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.479155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nqhh2-config-wp2s8" event={"ID":"430c67d6-52aa-4386-8403-7be27bbe3abf","Type":"ContainerDied","Data":"dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996"} Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.479201 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd8f9035b0816f5f9410c15b72a0d604a1fd3f98733f80750a74e5cfa2aad996" Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.479233 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nqhh2-config-wp2s8" Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.859093 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:22 crc kubenswrapper[4949]: I0120 15:06:22.866117 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nqhh2-config-wp2s8"] Jan 20 15:06:23 crc kubenswrapper[4949]: I0120 15:06:23.487048 4949 generic.go:334] "Generic (PLEG): container finished" podID="c22e6b14-e94a-4bb0-a034-60c355928551" containerID="df65c94aa13ba8c6b5396fc92732fe87ae0f8f77c763f304cef724603233c87d" exitCode=0 Jan 20 15:06:23 crc kubenswrapper[4949]: I0120 15:06:23.487096 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"df65c94aa13ba8c6b5396fc92732fe87ae0f8f77c763f304cef724603233c87d"} Jan 20 15:06:24 crc kubenswrapper[4949]: I0120 15:06:24.800438 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" path="/var/lib/kubelet/pods/430c67d6-52aa-4386-8403-7be27bbe3abf/volumes" Jan 20 15:06:26 crc kubenswrapper[4949]: I0120 15:06:26.518287 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerStarted","Data":"eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0"} Jan 20 15:06:26 crc kubenswrapper[4949]: I0120 15:06:26.539438 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fpdcn" podStartSLOduration=3.304784367 podStartE2EDuration="7.539421856s" podCreationTimestamp="2026-01-20 15:06:19 +0000 UTC" firstStartedPulling="2026-01-20 15:06:21.469250925 +0000 UTC m=+977.279081783" lastFinishedPulling="2026-01-20 15:06:25.703888404 +0000 UTC m=+981.513719272" observedRunningTime="2026-01-20 15:06:26.534418865 +0000 UTC m=+982.344249723" watchObservedRunningTime="2026-01-20 15:06:26.539421856 +0000 UTC m=+982.349252734" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.162299 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.534946 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:06:27 crc kubenswrapper[4949]: E0120 15:06:27.535246 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerName="ovn-config" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.535258 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerName="ovn-config" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.535415 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="430c67d6-52aa-4386-8403-7be27bbe3abf" containerName="ovn-config" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.535902 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.545668 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.557508 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.558999 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.560415 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.584799 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.639908 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.641112 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.649623 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653301 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653399 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653473 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.653570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.744461 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.745606 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.748310 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.754855 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.754923 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.754969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755037 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755069 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755608 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.755837 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.757913 4949 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.776051 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"barbican-7a79-account-create-update-zrwtk\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.779356 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"barbican-db-create-tdr7p\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.819921 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.821024 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.830131 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.830410 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.832914 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.838996 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.843778 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.852110 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.852467 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.853505 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857218 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857387 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.857440 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.858235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.867501 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.878813 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"cinder-db-create-zd7sx\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.890237 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.963676 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.964908 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.965681 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.969945 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971068 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971118 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971155 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971186 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971207 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971228 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.971264 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.975707 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:27 crc kubenswrapper[4949]: I0120 15:06:27.978294 4949 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.007004 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod \"cinder-9aa3-account-create-update-jbv24\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.063212 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087352 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.087536 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.101128 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.101366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.101428 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc 
kubenswrapper[4949]: I0120 15:06:28.104326 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.121272 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.123249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.151628 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"keystone-db-sync-kp4rp\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") " pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.162531 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"neutron-db-create-4b86v\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.210589 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.210741 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.211908 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.238120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"neutron-f894-account-create-update-zl66h\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc 
kubenswrapper[4949]: I0120 15:06:28.334993 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kp4rp" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.370673 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.384269 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.406857 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.471919 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.548684 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7a79-account-create-update-zrwtk" event={"ID":"de0efbc8-5060-4336-85af-23b901dd02fe","Type":"ContainerStarted","Data":"d94e7ded5171c9b3a8c47eac006dc9d7444a0d8d46e0ce93cdf09a52a174763c"} Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.569909 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdr7p" event={"ID":"900c89f3-a834-4a95-88cf-b6fda3fc9c58","Type":"ContainerStarted","Data":"4ffd101a8f2483091a302d6687d11f156dca57ac5e844250f312b474bda801d7"} Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.855262 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.909482 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.934153 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:06:28 crc kubenswrapper[4949]: W0120 15:06:28.946557 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2114c9bc_9691_4d96_8541_28ec5473428a.slice/crio-5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7 WatchSource:0}: Error finding container 5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7: Status 404 returned error can't find the container with id 5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7 Jan 20 15:06:28 crc kubenswrapper[4949]: I0120 15:06:28.993109 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.002642 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:06:29 crc kubenswrapper[4949]: W0120 15:06:29.036268 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e8050e_32dc_4014_9bc7_cd06d127eb38.slice/crio-00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1 WatchSource:0}: Error finding container 00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1: Status 404 returned error can't find the container with id 00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1 Jan 20 15:06:29 crc kubenswrapper[4949]: W0120 15:06:29.065430 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5b36c38_4cb3_43d1_ade8_a1e554264870.slice/crio-6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51 WatchSource:0}: Error finding container 6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51: Status 404 returned error can't find the container with id 6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.578751 4949 generic.go:334] "Generic (PLEG): container finished" podID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerID="a781bfdfd8762ae5e24e9222dfc90fa11c886930c4dbb418962538438aae1ac6" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.578849 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f894-account-create-update-zl66h" event={"ID":"6cbefde7-e737-4f29-9093-afc47f438c4c","Type":"ContainerDied","Data":"a781bfdfd8762ae5e24e9222dfc90fa11c886930c4dbb418962538438aae1ac6"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.578895 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f894-account-create-update-zl66h" event={"ID":"6cbefde7-e737-4f29-9093-afc47f438c4c","Type":"ContainerStarted","Data":"3fd734e70bf867cec3cedefb80ef4c42eb46641292e23913e6398e2e9904453f"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.582034 4949 generic.go:334] "Generic (PLEG): container finished" podID="de0efbc8-5060-4336-85af-23b901dd02fe" containerID="e4c82d229c717e5c0ffde6b9f00c036b0384157d1d756dd1f0e6b2ffaf868b06" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.582095 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7a79-account-create-update-zrwtk" event={"ID":"de0efbc8-5060-4336-85af-23b901dd02fe","Type":"ContainerDied","Data":"e4c82d229c717e5c0ffde6b9f00c036b0384157d1d756dd1f0e6b2ffaf868b06"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.583353 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerStarted","Data":"00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.585622 4949 generic.go:334] "Generic (PLEG): container finished" podID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerID="1b29787e73d44fce82b44b4dc092f944512be0b9918fd3a1f7b95398ec00eb0f" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.585680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9aa3-account-create-update-jbv24" event={"ID":"b57b3d7e-755f-43d2-aab3-f6d68a062a37","Type":"ContainerDied","Data":"1b29787e73d44fce82b44b4dc092f944512be0b9918fd3a1f7b95398ec00eb0f"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.585700 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9aa3-account-create-update-jbv24" event={"ID":"b57b3d7e-755f-43d2-aab3-f6d68a062a37","Type":"ContainerStarted","Data":"f03ba0db3ab62600d49d0d55b337a3e757a50a3b55e6815f98b4e6cacb6331e7"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.586805 4949 generic.go:334] "Generic (PLEG): container finished" podID="2114c9bc-9691-4d96-8541-28ec5473428a" containerID="c597795c21e284cf8447b4c1ba489d0c9f85fbd9dd3ef4fe3d4ba5bb6bd98cfb" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.586852 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd7sx" 
event={"ID":"2114c9bc-9691-4d96-8541-28ec5473428a","Type":"ContainerDied","Data":"c597795c21e284cf8447b4c1ba489d0c9f85fbd9dd3ef4fe3d4ba5bb6bd98cfb"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.586870 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd7sx" event={"ID":"2114c9bc-9691-4d96-8541-28ec5473428a","Type":"ContainerStarted","Data":"5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.587997 4949 generic.go:334] "Generic (PLEG): container finished" podID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerID="16aca3788ba46fca2c3a4e2db01394682bdf190975c465ad5615866366e0a008" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.588045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdr7p" event={"ID":"900c89f3-a834-4a95-88cf-b6fda3fc9c58","Type":"ContainerDied","Data":"16aca3788ba46fca2c3a4e2db01394682bdf190975c465ad5615866366e0a008"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.589071 4949 generic.go:334] "Generic (PLEG): container finished" podID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerID="d629aa6c999c4680b1c85169158551de91f7a34a4f27afe1607eb228257fc70c" exitCode=0 Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.589105 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4b86v" event={"ID":"c5b36c38-4cb3-43d1-ade8-a1e554264870","Type":"ContainerDied","Data":"d629aa6c999c4680b1c85169158551de91f7a34a4f27afe1607eb228257fc70c"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.589120 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4b86v" event={"ID":"c5b36c38-4cb3-43d1-ade8-a1e554264870","Type":"ContainerStarted","Data":"6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51"} Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.905432 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.905837 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:29 crc kubenswrapper[4949]: I0120 15:06:29.951763 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:30 crc kubenswrapper[4949]: I0120 15:06:30.657181 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fpdcn" Jan 20 15:06:30 crc kubenswrapper[4949]: I0120 15:06:30.735983 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"] Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.008162 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zd7sx" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.160215 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") pod \"2114c9bc-9691-4d96-8541-28ec5473428a\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.160428 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") pod \"2114c9bc-9691-4d96-8541-28ec5473428a\" (UID: \"2114c9bc-9691-4d96-8541-28ec5473428a\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.161124 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2114c9bc-9691-4d96-8541-28ec5473428a" (UID: "2114c9bc-9691-4d96-8541-28ec5473428a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.166071 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4" (OuterVolumeSpecName: "kube-api-access-x77d4") pod "2114c9bc-9691-4d96-8541-28ec5473428a" (UID: "2114c9bc-9691-4d96-8541-28ec5473428a"). InnerVolumeSpecName "kube-api-access-x77d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.246357 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4b86v" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.252825 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.260962 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.262036 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2114c9bc-9691-4d96-8541-28ec5473428a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.262060 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77d4\" (UniqueName: \"kubernetes.io/projected/2114c9bc-9691-4d96-8541-28ec5473428a-kube-api-access-x77d4\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.270092 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.274563 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tdr7p" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362705 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") pod \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362755 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") pod \"6cbefde7-e737-4f29-9093-afc47f438c4c\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362785 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") pod \"de0efbc8-5060-4336-85af-23b901dd02fe\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362821 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") pod \"de0efbc8-5060-4336-85af-23b901dd02fe\" (UID: \"de0efbc8-5060-4336-85af-23b901dd02fe\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") pod \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\" (UID: \"b57b3d7e-755f-43d2-aab3-f6d68a062a37\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362934 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") pod \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.362950 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") pod \"6cbefde7-e737-4f29-9093-afc47f438c4c\" (UID: \"6cbefde7-e737-4f29-9093-afc47f438c4c\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363004 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") pod \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\" (UID: \"900c89f3-a834-4a95-88cf-b6fda3fc9c58\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363027 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") pod \"c5b36c38-4cb3-43d1-ade8-a1e554264870\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363046 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56t9b\" (UniqueName: 
\"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") pod \"c5b36c38-4cb3-43d1-ade8-a1e554264870\" (UID: \"c5b36c38-4cb3-43d1-ade8-a1e554264870\") " Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de0efbc8-5060-4336-85af-23b901dd02fe" (UID: "de0efbc8-5060-4336-85af-23b901dd02fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.363874 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cbefde7-e737-4f29-9093-afc47f438c4c" (UID: "6cbefde7-e737-4f29-9093-afc47f438c4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.364005 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "900c89f3-a834-4a95-88cf-b6fda3fc9c58" (UID: "900c89f3-a834-4a95-88cf-b6fda3fc9c58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.364234 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5b36c38-4cb3-43d1-ade8-a1e554264870" (UID: "c5b36c38-4cb3-43d1-ade8-a1e554264870"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.366777 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7" (OuterVolumeSpecName: "kube-api-access-7zpf7") pod "6cbefde7-e737-4f29-9093-afc47f438c4c" (UID: "6cbefde7-e737-4f29-9093-afc47f438c4c"). InnerVolumeSpecName "kube-api-access-7zpf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.366815 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z" (OuterVolumeSpecName: "kube-api-access-4577z") pod "de0efbc8-5060-4336-85af-23b901dd02fe" (UID: "de0efbc8-5060-4336-85af-23b901dd02fe"). InnerVolumeSpecName "kube-api-access-4577z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.367272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b" (OuterVolumeSpecName: "kube-api-access-56t9b") pod "c5b36c38-4cb3-43d1-ade8-a1e554264870" (UID: "c5b36c38-4cb3-43d1-ade8-a1e554264870"). InnerVolumeSpecName "kube-api-access-56t9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.368014 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4" (OuterVolumeSpecName: "kube-api-access-q9zf4") pod "b57b3d7e-755f-43d2-aab3-f6d68a062a37" (UID: "b57b3d7e-755f-43d2-aab3-f6d68a062a37"). InnerVolumeSpecName "kube-api-access-q9zf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.368728 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt" (OuterVolumeSpecName: "kube-api-access-jfcwt") pod "900c89f3-a834-4a95-88cf-b6fda3fc9c58" (UID: "900c89f3-a834-4a95-88cf-b6fda3fc9c58"). InnerVolumeSpecName "kube-api-access-jfcwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465066 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbefde7-e737-4f29-9093-afc47f438c4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465106 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfcwt\" (UniqueName: \"kubernetes.io/projected/900c89f3-a834-4a95-88cf-b6fda3fc9c58-kube-api-access-jfcwt\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465121 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/900c89f3-a834-4a95-88cf-b6fda3fc9c58-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465130 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b36c38-4cb3-43d1-ade8-a1e554264870-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465141 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56t9b\" (UniqueName: \"kubernetes.io/projected/c5b36c38-4cb3-43d1-ade8-a1e554264870-kube-api-access-56t9b\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465151 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zpf7\" (UniqueName: \"kubernetes.io/projected/6cbefde7-e737-4f29-9093-afc47f438c4c-kube-api-access-7zpf7\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465160 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4577z\" (UniqueName: \"kubernetes.io/projected/de0efbc8-5060-4336-85af-23b901dd02fe-kube-api-access-4577z\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465170 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0efbc8-5060-4336-85af-23b901dd02fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.465181 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9zf4\" (UniqueName: \"kubernetes.io/projected/b57b3d7e-755f-43d2-aab3-f6d68a062a37-kube-api-access-q9zf4\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.524932 4949 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b57b3d7e-755f-43d2-aab3-f6d68a062a37" (UID: "b57b3d7e-755f-43d2-aab3-f6d68a062a37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.566184 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57b3d7e-755f-43d2-aab3-f6d68a062a37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.606110 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f894-account-create-update-zl66h" event={"ID":"6cbefde7-e737-4f29-9093-afc47f438c4c","Type":"ContainerDied","Data":"3fd734e70bf867cec3cedefb80ef4c42eb46641292e23913e6398e2e9904453f"} Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.606146 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f894-account-create-update-zl66h" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.606159 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd734e70bf867cec3cedefb80ef4c42eb46641292e23913e6398e2e9904453f" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.607308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7a79-account-create-update-zrwtk" event={"ID":"de0efbc8-5060-4336-85af-23b901dd02fe","Type":"ContainerDied","Data":"d94e7ded5171c9b3a8c47eac006dc9d7444a0d8d46e0ce93cdf09a52a174763c"} Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.607327 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7a79-account-create-update-zrwtk" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.607339 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d94e7ded5171c9b3a8c47eac006dc9d7444a0d8d46e0ce93cdf09a52a174763c" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.621702 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9aa3-account-create-update-jbv24" event={"ID":"b57b3d7e-755f-43d2-aab3-f6d68a062a37","Type":"ContainerDied","Data":"f03ba0db3ab62600d49d0d55b337a3e757a50a3b55e6815f98b4e6cacb6331e7"} Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.621742 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f03ba0db3ab62600d49d0d55b337a3e757a50a3b55e6815f98b4e6cacb6331e7" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.621718 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9aa3-account-create-update-jbv24" Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.624123 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zd7sx" event={"ID":"2114c9bc-9691-4d96-8541-28ec5473428a","Type":"ContainerDied","Data":"5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7"} Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.624145 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zd7sx"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.624165 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5805c8d3b1617cb114fd03c1b58a11aabee02d968e370afd068587de164137e7"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.626067 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tdr7p" event={"ID":"900c89f3-a834-4a95-88cf-b6fda3fc9c58","Type":"ContainerDied","Data":"4ffd101a8f2483091a302d6687d11f156dca57ac5e844250f312b474bda801d7"}
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.626102 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffd101a8f2483091a302d6687d11f156dca57ac5e844250f312b474bda801d7"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.626162 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tdr7p"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.636243 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4b86v"
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.636854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4b86v" event={"ID":"c5b36c38-4cb3-43d1-ade8-a1e554264870","Type":"ContainerDied","Data":"6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51"}
Jan 20 15:06:31 crc kubenswrapper[4949]: I0120 15:06:31.636886 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccae75ba6406689cc7cc144cdab9d75cfe63c34fceb0b559a82e213796edc51"
Jan 20 15:06:32 crc kubenswrapper[4949]: I0120 15:06:32.654436 4949 generic.go:334] "Generic (PLEG): container finished" podID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerID="3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4" exitCode=0
Jan 20 15:06:32 crc kubenswrapper[4949]: I0120 15:06:32.654967 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fpdcn" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server" containerID="cri-o://eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0" gracePeriod=2
Jan 20 15:06:32 crc kubenswrapper[4949]: I0120 15:06:32.655044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerDied","Data":"3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4"}
Jan 20 15:06:33 crc kubenswrapper[4949]: I0120 15:06:33.675406 4949 generic.go:334] "Generic (PLEG): container finished" podID="c22e6b14-e94a-4bb0-a034-60c355928551" containerID="eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0" exitCode=0
Jan 20 15:06:33 crc kubenswrapper[4949]: I0120 15:06:33.675473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0"}
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.691204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-48l6g" event={"ID":"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e","Type":"ContainerDied","Data":"1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae"}
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.691615 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1934c18b70d541453fcd80efb6cdf6f425b53ab9bcfc04e9c70ea1fba6bfe7ae"
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.777333 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-48l6g"
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.860316 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn"
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922416 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") pod \"c22e6b14-e94a-4bb0-a034-60c355928551\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922497 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") pod \"c22e6b14-e94a-4bb0-a034-60c355928551\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922544 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922667 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922744 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922809 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") pod \"c22e6b14-e94a-4bb0-a034-60c355928551\" (UID: \"c22e6b14-e94a-4bb0-a034-60c355928551\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.922834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") pod \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\" (UID: \"c607bb7c-569c-4da2-b6bf-5b6c9b5c041e\") "
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.924611 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities" (OuterVolumeSpecName: "utilities") pod "c22e6b14-e94a-4bb0-a034-60c355928551" (UID: "c22e6b14-e94a-4bb0-a034-60c355928551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.927499 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd" (OuterVolumeSpecName: "kube-api-access-bxjpd") pod "c22e6b14-e94a-4bb0-a034-60c355928551" (UID: "c22e6b14-e94a-4bb0-a034-60c355928551"). InnerVolumeSpecName "kube-api-access-bxjpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.927613 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.927965 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr" (OuterVolumeSpecName: "kube-api-access-fb2kr") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "kube-api-access-fb2kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.944322 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c22e6b14-e94a-4bb0-a034-60c355928551" (UID: "c22e6b14-e94a-4bb0-a034-60c355928551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.950047 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:34 crc kubenswrapper[4949]: I0120 15:06:34.961871 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data" (OuterVolumeSpecName: "config-data") pod "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" (UID: "c607bb7c-569c-4da2-b6bf-5b6c9b5c041e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026596 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026630 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026645 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjpd\" (UniqueName: \"kubernetes.io/projected/c22e6b14-e94a-4bb0-a034-60c355928551-kube-api-access-bxjpd\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026656 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026667 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026675 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22e6b14-e94a-4bb0-a034-60c355928551-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.026686 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb2kr\" (UniqueName: \"kubernetes.io/projected/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e-kube-api-access-fb2kr\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.698978 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerStarted","Data":"db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa"}
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.709947 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpdcn" event={"ID":"c22e6b14-e94a-4bb0-a034-60c355928551","Type":"ContainerDied","Data":"be1513768022a765d8528f5739925bf8c2f745a2c54942f090b5b47b0cc445fd"}
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.709970 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-48l6g"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.710018 4949 scope.go:117] "RemoveContainer" containerID="eb13f79c8898fb4db25f41d0710d9eebb2e989ba1d1131c8ac2a12ca352871c0"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.710027 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpdcn"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.718232 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kp4rp" podStartSLOduration=3.080959181 podStartE2EDuration="8.718207273s" podCreationTimestamp="2026-01-20 15:06:27 +0000 UTC" firstStartedPulling="2026-01-20 15:06:29.038652512 +0000 UTC m=+984.848483370" lastFinishedPulling="2026-01-20 15:06:34.675900604 +0000 UTC m=+990.485731462" observedRunningTime="2026-01-20 15:06:35.715762584 +0000 UTC m=+991.525593442" watchObservedRunningTime="2026-01-20 15:06:35.718207273 +0000 UTC m=+991.528038141"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.760834 4949 scope.go:117] "RemoveContainer" containerID="df65c94aa13ba8c6b5396fc92732fe87ae0f8f77c763f304cef724603233c87d"
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.761510 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"]
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.767970 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpdcn"]
Jan 20 15:06:35 crc kubenswrapper[4949]: I0120 15:06:35.780976 4949 scope.go:117] "RemoveContainer" containerID="91e2413b0353cd08abc8762744ba059b7a151348ee4608eb1c7a3ef0b3f6a658"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.216507 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226756 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226775 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226785 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226791 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226799 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-utilities"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226805 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-utilities"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226819 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226826 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226832 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-content"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226838 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="extract-content"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226847 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerName="glance-db-sync"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226852 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerName="glance-db-sync"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226863 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226869 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226879 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226886 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226893 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226898 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: E0120 15:06:36.226916 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.226921 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227068 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" containerName="registry-server"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227079 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" containerName="glance-db-sync"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227090 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227098 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227114 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227121 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" containerName="mariadb-account-create-update"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227132 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227138 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" containerName="mariadb-database-create"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.227944 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.247759 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358006 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358081 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358119 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358170 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.358193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460119 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.460134 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461148 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461280 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.461249 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.501984 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"dnsmasq-dns-554567b4f7-pspnz\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") " pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.583769 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:36 crc kubenswrapper[4949]: I0120 15:06:36.801700 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22e6b14-e94a-4bb0-a034-60c355928551" path="/var/lib/kubelet/pods/c22e6b14-e94a-4bb0-a034-60c355928551/volumes"
Jan 20 15:06:37 crc kubenswrapper[4949]: W0120 15:06:37.050533 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod946f53c7_f2c2_4ffe_8378_32e4d2ae5d88.slice/crio-90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee WatchSource:0}: Error finding container 90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee: Status 404 returned error can't find the container with id 90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.052320 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.446793 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.730423 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerStarted","Data":"90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee"}
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.804759 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"]
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.806503 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.815694 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"]
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.878896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.879220 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.879955 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.981672 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.982131 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.982328 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.982732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:37 crc kubenswrapper[4949]: I0120 15:06:37.983196 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.004473 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"redhat-operators-s8xd7\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.122635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.409578 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"]
Jan 20 15:06:38 crc kubenswrapper[4949]: I0120 15:06:38.759196 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerStarted","Data":"eeb7cbee20ed2b90b6962ccace8e1102267ebecaac9b546c8fad51ab9499282d"}
Jan 20 15:06:40 crc kubenswrapper[4949]: I0120 15:06:40.776749 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerStarted","Data":"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"}
Jan 20 15:06:40 crc kubenswrapper[4949]: I0120 15:06:40.778695 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerStarted","Data":"7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8"}
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.787187 4949 generic.go:334] "Generic (PLEG): container finished" podID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerID="7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8" exitCode=0
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.787272 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8"}
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.788497 4949 generic.go:334] "Generic (PLEG): container finished" podID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb" exitCode=0
Jan 20 15:06:41 crc kubenswrapper[4949]: I0120 15:06:41.788549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerDied","Data":"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"}
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.803846 4949 generic.go:334] "Generic (PLEG): container finished" podID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerID="db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa" exitCode=0
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.803933 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerDied","Data":"db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa"}
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.808409 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerStarted","Data":"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"}
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.809233 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:42 crc kubenswrapper[4949]: I0120 15:06:42.855632 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" podStartSLOduration=6.855596354 podStartE2EDuration="6.855596354s" podCreationTimestamp="2026-01-20 15:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:42.850208151 +0000 UTC m=+998.660039009" watchObservedRunningTime="2026-01-20 15:06:42.855596354 +0000 UTC m=+998.665427222"
Jan 20 15:06:43 crc kubenswrapper[4949]: I0120 15:06:43.818624 4949 generic.go:334] "Generic (PLEG): container finished" podID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerID="1bfde9055b8627100b5c93b232b289e018e33d5c7ac7bc51099c7c1742a2725c" exitCode=0
Jan 20 15:06:43 crc kubenswrapper[4949]: I0120 15:06:43.818681 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"1bfde9055b8627100b5c93b232b289e018e33d5c7ac7bc51099c7c1742a2725c"}
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.147133 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kp4rp"
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.197076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") pod \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") "
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.197126 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") pod \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") "
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.197201 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") pod \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\" (UID: \"a8e8050e-32dc-4014-9bc7-cd06d127eb38\") "
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.205677 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg" (OuterVolumeSpecName: "kube-api-access-g85tg") pod "a8e8050e-32dc-4014-9bc7-cd06d127eb38" (UID: "a8e8050e-32dc-4014-9bc7-cd06d127eb38"). InnerVolumeSpecName "kube-api-access-g85tg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.219030 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8e8050e-32dc-4014-9bc7-cd06d127eb38" (UID: "a8e8050e-32dc-4014-9bc7-cd06d127eb38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.236365 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data" (OuterVolumeSpecName: "config-data") pod "a8e8050e-32dc-4014-9bc7-cd06d127eb38" (UID: "a8e8050e-32dc-4014-9bc7-cd06d127eb38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.300334 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g85tg\" (UniqueName: \"kubernetes.io/projected/a8e8050e-32dc-4014-9bc7-cd06d127eb38-kube-api-access-g85tg\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.300373 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.300387 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8e8050e-32dc-4014-9bc7-cd06d127eb38-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.827672 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerStarted","Data":"5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec"}
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.829172 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kp4rp" event={"ID":"a8e8050e-32dc-4014-9bc7-cd06d127eb38","Type":"ContainerDied","Data":"00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1"}
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.829208 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e619834bcc8bedaf48ea5ca3779b535449a29be26c43fe2759e2c9767d15f1"
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.829182 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kp4rp"
Jan 20 15:06:44 crc kubenswrapper[4949]: I0120 15:06:44.848595 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8xd7" podStartSLOduration=5.281441215 podStartE2EDuration="7.848577796s" podCreationTimestamp="2026-01-20 15:06:37 +0000 UTC" firstStartedPulling="2026-01-20 15:06:41.788998984 +0000 UTC m=+997.598829842" lastFinishedPulling="2026-01-20 15:06:44.356135565 +0000 UTC m=+1000.165966423" observedRunningTime="2026-01-20 15:06:44.842264063 +0000 UTC m=+1000.652094941" watchObservedRunningTime="2026-01-20 15:06:44.848577796 +0000 UTC m=+1000.658408654"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.115377 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.167706 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"]
Jan 20 15:06:45 crc kubenswrapper[4949]: E0120 15:06:45.168139 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerName="keystone-db-sync"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.168164 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerName="keystone-db-sync"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.168381 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" containerName="keystone-db-sync"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.169439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.187133 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wv77h"]
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.188905 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.195471 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204231 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204740 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204785 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.204737 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.210239 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"]
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224300 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224366 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224433 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224508 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224629 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224763 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.224866 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.235629 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv77h"]
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326352 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326461 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326506 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326565 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326627 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326694 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326718 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326756 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326797 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.326837 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.328141 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.330274 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.336086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.345093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.354043 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.354503 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.354896 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.355351 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.369362 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.390072 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"dnsmasq-dns-67795cd9-w5vt7\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") " pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.408291 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"keystone-bootstrap-wv77h\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") " pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.441108 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69655bc997-jlksz"]
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.442615 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.446674 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.446903 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.447465 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.447670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-rlxzz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.504242 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.504689 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69655bc997-jlksz"]
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.517569 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.569717 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.569990 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.570051 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.570462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.570639 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676527 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676567 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676597 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.676631 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.677310 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.677692 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.678552 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.732100 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2fwjt"]
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.733632 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2fwjt"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.738015 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.741563 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.741742 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.741876 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6qnbk"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.743824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"horizon-69655bc997-jlksz\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " pod="openstack/horizon-69655bc997-jlksz"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783576 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783634 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783671 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783744 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783787 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt"
Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.783899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4htd\" (UniqueName:
\"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.808580 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lbd6l"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.809642 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.816218 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m8tbw" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.816647 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.816882 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.845885 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.859000 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns" containerID="cri-o://98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36" gracePeriod=10 Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.869934 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.874973 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lbd6l"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886031 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886161 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886177 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886212 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886246 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886286 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.886305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.887436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.904791 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lm4wz"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.909576 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.910376 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.912965 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.926468 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.926618 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.926690 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.932806 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hgk97" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.936943 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.938724 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.945170 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.945536 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.953932 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"cinder-db-sync-2fwjt\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.990141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.990254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:45 crc kubenswrapper[4949]: I0120 15:06:45.990271 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.001288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod 
\"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.004614 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lm4wz"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.005870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.028178 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"neutron-db-sync-lbd6l\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.087576 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j9pm7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.088633 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.089897 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mk2w7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091044 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091540 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091582 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091661 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091768 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091797 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091823 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.091975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.092067 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.092107 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.098923 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.107359 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.116605 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.134602 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9pm7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.149394 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.160664 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.162187 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.165796 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.167342 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.174909 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193636 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193933 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193954 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193977 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.193996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc 
kubenswrapper[4949]: I0120 15:06:46.194053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194071 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194118 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194145 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194168 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.194187 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.203789 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.204280 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.207422 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.218579 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.219185 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.221929 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.222999 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.223329 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.225759 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.226658 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.227300 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"barbican-db-sync-lm4wz\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.279901 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295130 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295212 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295257 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295295 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295318 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295343 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295359 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: 
\"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295403 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295444 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295486 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295507 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.295538 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.296964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.305474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.305604 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.312267 
4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.322788 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"placement-db-sync-j9pm7\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.346081 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401504 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401591 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401624 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401729 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401758 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401795 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401836 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401862 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.401889 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.403226 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.405689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.405919 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.406239 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.406637 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.406746 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.408343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod 
\"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.412989 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.432061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"dnsmasq-dns-5b6dbdb6f5-2b8mk\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.434731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"horizon-cbd48cfd5-mt6hk\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.452938 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.457459 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"] Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.503965 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:06:46 crc kubenswrapper[4949]: W0120 15:06:46.513231 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod324ec7e2_de25_442e_851f_ffea56e932b2.slice/crio-77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b WatchSource:0}: Error finding container 77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b: Status 404 returned error can't find the container with id 77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.523982 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.669030 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv77h"] Jan 20 15:06:46 crc kubenswrapper[4949]: W0120 15:06:46.679314 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd34f223a_75f1_410c_8541_cbf8cc7793d0.slice/crio-9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721 WatchSource:0}: Error finding container 9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721: Status 404 returned error can't find the container with id 9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721 Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.811048 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.874266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerStarted","Data":"77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b"}
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.888069 4949 generic.go:334] "Generic (PLEG): container finished" podID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36" exitCode=0
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.888267 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-pspnz"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.894148 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerDied","Data":"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"}
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.894211 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-pspnz" event={"ID":"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88","Type":"ContainerDied","Data":"90eb568b282259805c4235ff0620c57b9c803fd0fb2e30ad6f12ef0ed5dba3ee"}
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.894232 4949 scope.go:117] "RemoveContainer" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.908753 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerStarted","Data":"9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721"}
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.911889 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") "
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.911980 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") "
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.912065 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") "
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.912163 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") "
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.912189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") pod \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\" (UID: \"946f53c7-f2c2-4ffe-8378-32e4d2ae5d88\") "
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.922440 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt" (OuterVolumeSpecName: "kube-api-access-2d8xt") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "kube-api-access-2d8xt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.943207 4949 scope.go:117] "RemoveContainer" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.977996 4949 scope.go:117] "RemoveContainer" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.979257 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:46 crc kubenswrapper[4949]: E0120 15:06:46.983679 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36\": container with ID starting with 98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36 not found: ID does not exist" containerID="98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.983728 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36"} err="failed to get container status \"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36\": rpc error: code = NotFound desc = could not find container \"98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36\": container with ID starting with 98ffc658321f99ea1c730a121cfdfdabed6e56809f2d2bc2c275fc74e3d09a36 not found: ID does not exist"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.983758 4949 scope.go:117] "RemoveContainer" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"
Jan 20 15:06:46 crc kubenswrapper[4949]: E0120 15:06:46.986942 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb\": container with ID starting with 34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb not found: ID does not exist" containerID="34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"
Jan 20 15:06:46 crc kubenswrapper[4949]: I0120 15:06:46.986966 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb"} err="failed to get container status \"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb\": rpc error: code = NotFound desc = could not find container \"34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb\": container with ID starting with 34ff75cd3c2570e5ea0b63b78be0fbf3d337255fed6fc7f4fc2515409aa713fb not found: ID does not exist"
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.000710 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.003138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.004312 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config" (OuterVolumeSpecName: "config") pod "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" (UID: "946f53c7-f2c2-4ffe-8378-32e4d2ae5d88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016450 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016486 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016496 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d8xt\" (UniqueName: \"kubernetes.io/projected/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-kube-api-access-2d8xt\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016509 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.016534 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.113472 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2fwjt"]
Jan 20 15:06:47 crc kubenswrapper[4949]: W0120 15:06:47.126832 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1f491ae_d6c7_4cc9_90f2_f76910f86c81.slice/crio-a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762 WatchSource:0}: Error finding container a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762: Status 404 returned error can't find the container with id a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.137722 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69655bc997-jlksz"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.233471 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.239823 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-pspnz"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.434886 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lbd6l"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.467652 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.479855 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lm4wz"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.485735 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.493301 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.559947 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9pm7"]
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.918285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69655bc997-jlksz" event={"ID":"f1f491ae-d6c7-4cc9-90f2-f76910f86c81","Type":"ContainerStarted","Data":"a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.920026 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerStarted","Data":"cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.920105 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init" containerID="cri-o://cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea" gracePeriod=10
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.922018 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerStarted","Data":"433534aab58a8907724519ebbdb734c9b17b626693f00598ad129acc054d365a"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.923421 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerStarted","Data":"d010c875444d4ba584246f10f1a99b66845b15ef9ef3b2384373a0f15b7f64f0"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.925352 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerStarted","Data":"a7bdb1a05ecb96436eaee5571c55a1026eac70b28bfa92211ab6b3111805bc2c"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.932405 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerStarted","Data":"f00ec8d28626a4cd0a80c63c891ae1ccadb47e0b90177f99a5486b458b879328"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.935435 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerStarted","Data":"1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.950960 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerStarted","Data":"a255ba2b9bedbb556f04da75175101bb69c927fe2e1d472e5ff955e9dfc35f8c"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.953184 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"d9198a98b9f1f6021caa331f5093846a0dd1690786dc4510142a57f8e1848ff4"}
Jan 20 15:06:47 crc kubenswrapper[4949]: I0120 15:06:47.958225 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerStarted","Data":"34775a3d497f9b0858712e8d09736f5da67205a1958b3dd7d2f0dcef5907e8e7"}
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.003110 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69655bc997-jlksz"]
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.068136 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wv77h" podStartSLOduration=3.068116332 podStartE2EDuration="3.068116332s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:48.056445126 +0000 UTC m=+1003.866275984" watchObservedRunningTime="2026-01-20 15:06:48.068116332 +0000 UTC m=+1003.877947180"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081082 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:06:48 crc kubenswrapper[4949]: E0120 15:06:48.081413 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081425 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns"
Jan 20 15:06:48 crc kubenswrapper[4949]: E0120 15:06:48.081447 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="init"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081453 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="init"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.081678 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" containerName="dnsmasq-dns"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.082545 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.113835 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.123904 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.123950 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.191163 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241595 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241651 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.241805 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.343606 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.343921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.343968 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344672 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.344822 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.345320 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.353615 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.405399 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"horizon-68ccd6ddcc-h9gfp\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.703386 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.822332 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946f53c7-f2c2-4ffe-8378-32e4d2ae5d88" path="/var/lib/kubelet/pods/946f53c7-f2c2-4ffe-8378-32e4d2ae5d88/volumes"
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.971428 4949 generic.go:334] "Generic (PLEG): container finished" podID="324ec7e2-de25-442e-851f-ffea56e932b2" containerID="cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea" exitCode=0
Jan 20 15:06:48 crc kubenswrapper[4949]: I0120 15:06:48.974531 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerDied","Data":"cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea"}
Jan 20 15:06:49 crc kubenswrapper[4949]: I0120 15:06:49.211697 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s8xd7" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" probeResult="failure" output=<
Jan 20 15:06:49 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s
Jan 20 15:06:49 crc kubenswrapper[4949]: >
Jan 20 15:06:49 crc kubenswrapper[4949]: W0120 15:06:49.241298 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1a77932_734e_416b_a182_5e84f6749d95.slice/crio-b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f WatchSource:0}: Error finding container b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f: Status 404 returned error can't find the container with id b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f
Jan 20 15:06:49 crc kubenswrapper[4949]: I0120 15:06:49.246723 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:06:49 crc kubenswrapper[4949]: I0120 15:06:49.988953 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerStarted","Data":"b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f"}
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.626361 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782644 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782750 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.782853 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.783035 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") pod \"324ec7e2-de25-442e-851f-ffea56e932b2\" (UID: \"324ec7e2-de25-442e-851f-ffea56e932b2\") "
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.788488 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2" (OuterVolumeSpecName: "kube-api-access-xwnl2") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "kube-api-access-xwnl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.811787 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.832062 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.832253 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config" (OuterVolumeSpecName: "config") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.848175 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "324ec7e2-de25-442e-851f-ffea56e932b2" (UID: "324ec7e2-de25-442e-851f-ffea56e932b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885479 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885509 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-config\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885532 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885541 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwnl2\" (UniqueName: \"kubernetes.io/projected/324ec7e2-de25-442e-851f-ffea56e932b2-kube-api-access-xwnl2\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:50 crc kubenswrapper[4949]: I0120 15:06:50.885550 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/324ec7e2-de25-442e-851f-ffea56e932b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.006064 4949 generic.go:334] "Generic (PLEG): container finished" podID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" exitCode=0
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.006142 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerDied","Data":"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f"}
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.017709 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerStarted","Data":"f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7"}
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.027611 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-w5vt7" event={"ID":"324ec7e2-de25-442e-851f-ffea56e932b2","Type":"ContainerDied","Data":"77a29181b9accdb4c8fff653e86970c765375a0a5c2daa3644fef700afeb303b"}
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.027669 4949 scope.go:117] "RemoveContainer" containerID="cfc090ae386590a9948cb8849fbfe025d46d1461fe828f8a14445d79a74c50ea"
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.027796 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-w5vt7"
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.066941 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lbd6l" podStartSLOduration=6.066886259 podStartE2EDuration="6.066886259s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:51.049693197 +0000 UTC m=+1006.859524055" watchObservedRunningTime="2026-01-20 15:06:51.066886259 +0000 UTC m=+1006.876717117"
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.111672 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"]
Jan 20 15:06:51 crc kubenswrapper[4949]: I0120 15:06:51.117926 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-w5vt7"]
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.067305 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerStarted","Data":"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8"}
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.067856 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.093806 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" podStartSLOduration=7.093792283 podStartE2EDuration="7.093792283s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:06:52.086678135 +0000 UTC m=+1007.896509013" watchObservedRunningTime="2026-01-20 15:06:52.093792283 +0000 UTC m=+1007.903623131"
Jan 20 15:06:52 crc kubenswrapper[4949]: I0120 15:06:52.802676 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" path="/var/lib/kubelet/pods/324ec7e2-de25-442e-851f-ffea56e932b2/volumes"
Jan 20 15:06:53 crc kubenswrapper[4949]: I0120 15:06:53.083753 4949 generic.go:334] "Generic (PLEG): container finished" podID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerID="1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa" exitCode=0
Jan 20 15:06:53 crc kubenswrapper[4949]: I0120 15:06:53.084363 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerDied","Data":"1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa"}
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.468153 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.487134 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"]
Jan 20 15:06:54 crc kubenswrapper[4949]: E0120 15:06:54.489545 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.489571 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.489761 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="324ec7e2-de25-442e-851f-ffea56e932b2" containerName="init"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.499983 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.503811 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.506753 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566599 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566646 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566696 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566727 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566756 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566782 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.566799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.588945 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.619131 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66d45cfc44-ltr94"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.621573 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.634496 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d45cfc44-ltr94"]
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672591 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672720 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672754 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672777 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672847 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.672878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.677186 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678547 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.678861 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.682707 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.694365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"horizon-68cb9b7c44-mz9j4\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-scripts\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08182d24-cea6-4daa-9dbb-efcb48b76434-logs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775848 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-combined-ca-bundle\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-config-data\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775908 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-tls-certs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775961 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqqv\" (UniqueName: \"kubernetes.io/projected/08182d24-cea6-4daa-9dbb-efcb48b76434-kube-api-access-wkqqv\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.775981 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-secret-key\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.830662 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877457 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-scripts\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877500 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08182d24-cea6-4daa-9dbb-efcb48b76434-logs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877597 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-combined-ca-bundle\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877623 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-config-data\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877661 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-tls-certs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877723 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkqqv\" (UniqueName: \"kubernetes.io/projected/08182d24-cea6-4daa-9dbb-efcb48b76434-kube-api-access-wkqqv\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.877742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-secret-key\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.879001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08182d24-cea6-4daa-9dbb-efcb48b76434-logs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.881469 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-secret-key\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.881954 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-scripts\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.882945 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08182d24-cea6-4daa-9dbb-efcb48b76434-config-data\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.883063 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-combined-ca-bundle\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.884427 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/08182d24-cea6-4daa-9dbb-efcb48b76434-horizon-tls-certs\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.910825 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkqqv\" (UniqueName: \"kubernetes.io/projected/08182d24-cea6-4daa-9dbb-efcb48b76434-kube-api-access-wkqqv\") pod \"horizon-66d45cfc44-ltr94\" (UID: \"08182d24-cea6-4daa-9dbb-efcb48b76434\") " pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:54 crc kubenswrapper[4949]: I0120 15:06:54.948022 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d45cfc44-ltr94"
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.430364 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486031 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486210 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486249 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.486375 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") pod \"d34f223a-75f1-410c-8541-cbf8cc7793d0\" (UID: \"d34f223a-75f1-410c-8541-cbf8cc7793d0\") "
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.492000 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.492689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.492781 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts" (OuterVolumeSpecName: "scripts") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.494845 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj" (OuterVolumeSpecName: "kube-api-access-jt9zj") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "kube-api-access-jt9zj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.515665 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data" (OuterVolumeSpecName: "config-data") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.520104 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d34f223a-75f1-410c-8541-cbf8cc7793d0" (UID: "d34f223a-75f1-410c-8541-cbf8cc7793d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588386 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-config-data\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588422 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588440 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt9zj\" (UniqueName: \"kubernetes.io/projected/d34f223a-75f1-410c-8541-cbf8cc7793d0-kube-api-access-jt9zj\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588462 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-scripts\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588477 4949 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:55 crc kubenswrapper[4949]: I0120 15:06:55.588491 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d34f223a-75f1-410c-8541-cbf8cc7793d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.116129 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv77h" event={"ID":"d34f223a-75f1-410c-8541-cbf8cc7793d0","Type":"ContainerDied","Data":"9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721"}
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.116410 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e1a62d3ab82af9afb864b5c2b40ebe76a963963597870eef0ab8ce420a2e721"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.116229 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv77h"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.513669 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.563427 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"]
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.565266 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" containerID="cri-o://e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746" gracePeriod=10
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.614573 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wv77h"]
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.621058 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wv77h"]
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.701013 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vx8lk"]
Jan 20 15:06:56 crc kubenswrapper[4949]: E0120 15:06:56.701415 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerName="keystone-bootstrap"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.701432 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerName="keystone-bootstrap"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.701627 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" containerName="keystone-bootstrap"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.702230 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.706894 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707183 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707290 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707320 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.707582 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.714879 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"]
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.802423 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34f223a-75f1-410c-8541-cbf8cc7793d0" path="/var/lib/kubelet/pods/d34f223a-75f1-410c-8541-cbf8cc7793d0/volumes"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.808838 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.808986 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809062 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809107 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809291 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.809462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911115 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911199 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911247 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.911446 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.918637 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.923613 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.924003 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.942978 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.944599 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:56 crc kubenswrapper[4949]: I0120 15:06:56.952577 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"keystone-bootstrap-vx8lk\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:57 crc kubenswrapper[4949]: I0120 15:06:57.033375 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk"
Jan 20 15:06:57 crc kubenswrapper[4949]: I0120 15:06:57.124617 4949 generic.go:334] "Generic (PLEG): container finished" podID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerID="e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746" exitCode=0
Jan 20 15:06:57 crc kubenswrapper[4949]: I0120 15:06:57.124671 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerDied","Data":"e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746"}
Jan 20 15:06:58 crc kubenswrapper[4949]: I0120 15:06:58.177807 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:58 crc kubenswrapper[4949]: I0120 15:06:58.228423 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8xd7"
Jan 20 15:06:58 crc kubenswrapper[4949]: I0120 15:06:58.415933 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"]
Jan 20 15:07:00 crc kubenswrapper[4949]: I0120 15:07:00.147969 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8xd7" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" containerID="cri-o://5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" gracePeriod=2
Jan 20 15:07:00 crc kubenswrapper[4949]: I0120 15:07:00.887851 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused"
Jan 20 15:07:01 crc kubenswrapper[4949]: I0120 15:07:01.159276 4949 generic.go:334] "Generic (PLEG): container finished" podID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" exitCode=0
Jan 20 15:07:01 crc kubenswrapper[4949]: I0120 15:07:01.159322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7"
event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec"} Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.785294 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.785792 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjkqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-j9pm7_openstack(1f96f008-7e3c-4512-bddd-51e42a0c7ce2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.786981 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-j9pm7" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.806655 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.806813 4949 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8fh76h586hd6hbdh574h5dh65fh548h65ch7ch554h5cdhc4h57ch5dchfch8ch568hbh5c5h65ch545hf6h55fh99h5cchc4h5fch559hb5h589q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fprdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-69655bc997-jlksz_openstack(f1f491ae-d6c7-4cc9-90f2-f76910f86c81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:03 crc kubenswrapper[4949]: E0120 15:07:03.811316 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-69655bc997-jlksz" podUID="f1f491ae-d6c7-4cc9-90f2-f76910f86c81" Jan 20 15:07:04 crc kubenswrapper[4949]: E0120 15:07:04.182009 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-j9pm7" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" Jan 20 15:07:11 crc kubenswrapper[4949]: I0120 15:07:05.887157 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.123207 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.123656 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.124066 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 15:07:11 crc kubenswrapper[4949]: E0120 15:07:08.124098 4949 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-s8xd7" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:11 crc kubenswrapper[4949]: I0120 15:07:10.887119 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-2vttb" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 20 15:07:11 crc kubenswrapper[4949]: I0120 15:07:10.887564 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.186157 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.186752 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn6jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lm4wz_openstack(f476712d-366a-4948-b282-66660a6d81c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.187939 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lm4wz" podUID="f476712d-366a-4948-b282-66660a6d81c4" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.263360 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.284106 4949 generic.go:334] "Generic (PLEG): container finished" podID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerID="f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7" exitCode=0 Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.284168 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerDied","Data":"f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7"} Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.289574 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69655bc997-jlksz" event={"ID":"f1f491ae-d6c7-4cc9-90f2-f76910f86c81","Type":"ContainerDied","Data":"a4e64f4bd7aa8fdb7cd6b2e1d7cbf4ea5d8d024b6176e0be5e70e08a567f8762"} Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.289683 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69655bc997-jlksz" Jan 20 15:07:13 crc kubenswrapper[4949]: E0120 15:07:13.291342 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lm4wz" podUID="f476712d-366a-4948-b282-66660a6d81c4" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316016 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316086 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316128 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316197 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.316229 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") pod \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\" (UID: \"f1f491ae-d6c7-4cc9-90f2-f76910f86c81\") " Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.317089 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts" (OuterVolumeSpecName: "scripts") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.318830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data" (OuterVolumeSpecName: "config-data") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.324400 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.324641 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs" (OuterVolumeSpecName: "logs") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.329763 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk" (OuterVolumeSpecName: "kube-api-access-fprdk") pod "f1f491ae-d6c7-4cc9-90f2-f76910f86c81" (UID: "f1f491ae-d6c7-4cc9-90f2-f76910f86c81"). InnerVolumeSpecName "kube-api-access-fprdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419317 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419378 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419388 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419398 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.419406 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fprdk\" (UniqueName: \"kubernetes.io/projected/f1f491ae-d6c7-4cc9-90f2-f76910f86c81-kube-api-access-fprdk\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.716490 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69655bc997-jlksz"] Jan 20 15:07:13 crc kubenswrapper[4949]: I0120 15:07:13.725490 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69655bc997-jlksz"] Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.301028 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8xd7" event={"ID":"2bda8fe4-4e94-40d2-83fb-916ac550b698","Type":"ContainerDied","Data":"eeb7cbee20ed2b90b6962ccace8e1102267ebecaac9b546c8fad51ab9499282d"} Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.301070 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb7cbee20ed2b90b6962ccace8e1102267ebecaac9b546c8fad51ab9499282d" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.303938 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-2vttb" event={"ID":"e64d5fa0-6c79-43df-9331-f9024cc3c9f4","Type":"ContainerDied","Data":"f9f5d1619d230fe16e03f871babb60f8165c69870d0389a062447e2bf198b69d"} Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.304038 4949 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f9f5d1619d230fe16e03f871babb60f8165c69870d0389a062447e2bf198b69d" Jan 20 15:07:14 crc kubenswrapper[4949]: E0120 15:07:14.327332 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 20 15:07:14 crc kubenswrapper[4949]: E0120 15:07:14.327840 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4htd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2fwjt_openstack(c18369cb-0b5b-40f7-bc73-af04fb510f31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 15:07:14 crc kubenswrapper[4949]: E0120 15:07:14.329912 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2fwjt" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.496694 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.526900 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544602 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544644 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544678 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544745 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.544787 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") pod \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\" (UID: \"e64d5fa0-6c79-43df-9331-f9024cc3c9f4\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.568475 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p" (OuterVolumeSpecName: "kube-api-access-t876p") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "kube-api-access-t876p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.648588 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") pod \"2bda8fe4-4e94-40d2-83fb-916ac550b698\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.650142 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") pod \"2bda8fe4-4e94-40d2-83fb-916ac550b698\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.650299 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") pod \"2bda8fe4-4e94-40d2-83fb-916ac550b698\" (UID: \"2bda8fe4-4e94-40d2-83fb-916ac550b698\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.650809 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t876p\" (UniqueName: \"kubernetes.io/projected/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-kube-api-access-t876p\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.657013 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities" (OuterVolumeSpecName: "utilities") pod "2bda8fe4-4e94-40d2-83fb-916ac550b698" (UID: "2bda8fe4-4e94-40d2-83fb-916ac550b698"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.666164 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj" (OuterVolumeSpecName: "kube-api-access-8vszj") pod "2bda8fe4-4e94-40d2-83fb-916ac550b698" (UID: "2bda8fe4-4e94-40d2-83fb-916ac550b698"). InnerVolumeSpecName "kube-api-access-8vszj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.701807 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.711209 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.711689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.716946 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config" (OuterVolumeSpecName: "config") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753183 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") pod \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") pod \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753273 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") pod \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\" (UID: \"40994e0d-d911-4b6a-9ae9-96fbc4be8a36\") " Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753709 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753726 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vszj\" (UniqueName: \"kubernetes.io/projected/2bda8fe4-4e94-40d2-83fb-916ac550b698-kube-api-access-8vszj\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753736 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753745 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.753753 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.758427 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r" (OuterVolumeSpecName: "kube-api-access-rlj6r") pod "40994e0d-d911-4b6a-9ae9-96fbc4be8a36" (UID: "40994e0d-d911-4b6a-9ae9-96fbc4be8a36"). InnerVolumeSpecName "kube-api-access-rlj6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.764889 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e64d5fa0-6c79-43df-9331-f9024cc3c9f4" (UID: "e64d5fa0-6c79-43df-9331-f9024cc3c9f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.778617 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config" (OuterVolumeSpecName: "config") pod "40994e0d-d911-4b6a-9ae9-96fbc4be8a36" (UID: "40994e0d-d911-4b6a-9ae9-96fbc4be8a36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.784669 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40994e0d-d911-4b6a-9ae9-96fbc4be8a36" (UID: "40994e0d-d911-4b6a-9ae9-96fbc4be8a36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.794445 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bda8fe4-4e94-40d2-83fb-916ac550b698" (UID: "2bda8fe4-4e94-40d2-83fb-916ac550b698"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.798789 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f491ae-d6c7-4cc9-90f2-f76910f86c81" path="/var/lib/kubelet/pods/f1f491ae-d6c7-4cc9-90f2-f76910f86c81/volumes" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.848318 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"] Jan 20 15:07:14 crc kubenswrapper[4949]: W0120 15:07:14.850806 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod706427a3_6d1f_4a5e_9b50_d84499daec46.slice/crio-f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b WatchSource:0}: Error finding container f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b: Status 404 returned error can't find the container with id f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854858 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8fe4-4e94-40d2-83fb-916ac550b698-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854883 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlj6r\" (UniqueName: \"kubernetes.io/projected/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-kube-api-access-rlj6r\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854895 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854907 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/40994e0d-d911-4b6a-9ae9-96fbc4be8a36-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.854916 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e64d5fa0-6c79-43df-9331-f9024cc3c9f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.945934 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"] Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.952592 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d45cfc44-ltr94"] Jan 20 15:07:14 crc kubenswrapper[4949]: W0120 15:07:14.953618 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b5f79a_1adc_4ec3_a257_ce37600d2357.slice/crio-a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36 WatchSource:0}: Error finding container a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36: Status 404 returned error can't find the container with id a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36 Jan 20 15:07:14 crc kubenswrapper[4949]: W0120 15:07:14.955170 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08182d24_cea6_4daa_9dbb_efcb48b76434.slice/crio-7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7 WatchSource:0}: Error finding container 7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7: Status 404 returned error can't find the container with id 7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7 Jan 20 15:07:14 crc kubenswrapper[4949]: I0120 15:07:14.957927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.315244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d45cfc44-ltr94" event={"ID":"08182d24-cea6-4daa-9dbb-efcb48b76434","Type":"ContainerStarted","Data":"9b7fcb23cf1b22d54783dddccc4d6105dc9312897b69812b01661d40eb317c5e"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.315608 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d45cfc44-ltr94" event={"ID":"08182d24-cea6-4daa-9dbb-efcb48b76434","Type":"ContainerStarted","Data":"7c1819de17dc876eccd8d7e8466e58b09cdf3fb78bb8d947dc66378df3c334d7"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.317296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lbd6l" event={"ID":"40994e0d-d911-4b6a-9ae9-96fbc4be8a36","Type":"ContainerDied","Data":"34775a3d497f9b0858712e8d09736f5da67205a1958b3dd7d2f0dcef5907e8e7"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.317342 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34775a3d497f9b0858712e8d09736f5da67205a1958b3dd7d2f0dcef5907e8e7" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.317303 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lbd6l" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerStarted","Data":"1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325189 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerStarted","Data":"893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325318 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cbd48cfd5-mt6hk" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" containerID="cri-o://893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.325414 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cbd48cfd5-mt6hk" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" containerID="cri-o://1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.345195 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerStarted","Data":"03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.345260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerStarted","Data":"89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.345271 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerStarted","Data":"f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354103 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerStarted","Data":"6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354138 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerStarted","Data":"de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354217 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68ccd6ddcc-h9gfp" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" containerID="cri-o://de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354448 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68ccd6ddcc-h9gfp" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" 
containerID="cri-o://6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6" gracePeriod=30 Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.354776 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cbd48cfd5-mt6hk" podStartSLOduration=3.631959758 podStartE2EDuration="30.354766021s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.516938402 +0000 UTC m=+1003.326769260" lastFinishedPulling="2026-01-20 15:07:14.239744645 +0000 UTC m=+1030.049575523" observedRunningTime="2026-01-20 15:07:15.345751042 +0000 UTC m=+1031.155581920" watchObservedRunningTime="2026-01-20 15:07:15.354766021 +0000 UTC m=+1031.164596879" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.356929 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerStarted","Data":"32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.356980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerStarted","Data":"a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.358690 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-2vttb" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.358960 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668"} Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.359032 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8xd7" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.360284 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2fwjt" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.403568 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68ccd6ddcc-h9gfp" podStartSLOduration=2.327409965 podStartE2EDuration="27.403550119s" podCreationTimestamp="2026-01-20 15:06:48 +0000 UTC" firstStartedPulling="2026-01-20 15:06:49.243571011 +0000 UTC m=+1005.053401859" lastFinishedPulling="2026-01-20 15:07:14.319711145 +0000 UTC m=+1030.129542013" observedRunningTime="2026-01-20 15:07:15.382532563 +0000 UTC m=+1031.192363411" watchObservedRunningTime="2026-01-20 15:07:15.403550119 +0000 UTC m=+1031.213380967" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.430072 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.466953 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vx8lk" podStartSLOduration=19.466936767 podStartE2EDuration="19.466936767s" podCreationTimestamp="2026-01-20 15:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:15.434275027 +0000 UTC m=+1031.244105875" watchObservedRunningTime="2026-01-20 15:07:15.466936767 +0000 UTC m=+1031.276767625" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.470771 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8xd7"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.499303 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.505040 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-2vttb"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.524282 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.530907 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-utilities" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.530951 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-utilities" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.530970 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.530977 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.530988 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-content" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.530995 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="extract-content" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.531005 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="init" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531010 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="init" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.531024 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531030 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:15 crc kubenswrapper[4949]: E0120 15:07:15.531048 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerName="neutron-db-sync" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531054 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerName="neutron-db-sync" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531280 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" containerName="neutron-db-sync" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531297 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" containerName="dnsmasq-dns" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.531308 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" containerName="registry-server" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.532208 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.598085 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683348 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683418 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683483 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.683543 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785568 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785662 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.785699 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.786771 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.786906 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.786951 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.787424 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.809721 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod \"dnsmasq-dns-5f66db59b9-v55wx\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.833039 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.838732 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.843646 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.843913 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.844042 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m8tbw" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.844185 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.860910 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.902004 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988502 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988588 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988606 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:15 crc kubenswrapper[4949]: I0120 15:07:15.988673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091284 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091314 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091336 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.091357 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.099510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.100894 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.102334 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.108900 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.119230 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"neutron-56bb6988d6-9n8x4\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.199184 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.379544 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d45cfc44-ltr94" event={"ID":"08182d24-cea6-4daa-9dbb-efcb48b76434","Type":"ContainerStarted","Data":"50a40c3b443c9d1f88865f231374a380f38e032a993dc472376f4b5afa9af43b"} Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.416018 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66d45cfc44-ltr94" podStartSLOduration=22.415999899 podStartE2EDuration="22.415999899s" podCreationTimestamp="2026-01-20 15:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:16.408982153 +0000 UTC m=+1032.218813031" watchObservedRunningTime="2026-01-20 15:07:16.415999899 +0000 UTC m=+1032.225830757" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.438806 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68cb9b7c44-mz9j4" podStartSLOduration=22.438789421 podStartE2EDuration="22.438789421s" podCreationTimestamp="2026-01-20 15:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:16.435228887 +0000 UTC m=+1032.245059745" watchObservedRunningTime="2026-01-20 15:07:16.438789421 +0000 UTC m=+1032.248620279" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.471974 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.525094 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.812251 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bda8fe4-4e94-40d2-83fb-916ac550b698" path="/var/lib/kubelet/pods/2bda8fe4-4e94-40d2-83fb-916ac550b698/volumes" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.813103 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64d5fa0-6c79-43df-9331-f9024cc3c9f4" path="/var/lib/kubelet/pods/e64d5fa0-6c79-43df-9331-f9024cc3c9f4/volumes" Jan 20 15:07:16 crc kubenswrapper[4949]: I0120 15:07:16.813632 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.386548 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerStarted","Data":"ddacbe5809c0f3426708e64d9337ca1ab93d7f38d1a8f505676198c5a7a916e0"} Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.388160 4949 generic.go:334] "Generic (PLEG): container finished" podID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerID="8c227b56e33a53d202583a4f1ddca6603645856cbfcd9ad6c053606a3845fa21" exitCode=0 Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.388397 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerDied","Data":"8c227b56e33a53d202583a4f1ddca6603645856cbfcd9ad6c053606a3845fa21"} Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.388492 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" 
event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerStarted","Data":"7264a419821a7cd4155fa26254f761dbcc032333908b45daed1c6c1c517da1c9"} Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.861755 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b8cd78967-6cmpj"] Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.863482 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.865899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.866159 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.877029 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8cd78967-6cmpj"] Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.927712 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-public-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.927782 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.927912 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-ovndb-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928127 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-internal-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928187 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hn82\" (UniqueName: \"kubernetes.io/projected/dae84f47-70ef-4a10-ae62-dae601b0de81-kube-api-access-7hn82\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928222 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-combined-ca-bundle\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:17 crc kubenswrapper[4949]: I0120 15:07:17.928358 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-httpd-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.029900 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-internal-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.029967 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hn82\" (UniqueName: \"kubernetes.io/projected/dae84f47-70ef-4a10-ae62-dae601b0de81-kube-api-access-7hn82\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-combined-ca-bundle\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-httpd-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-public-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030204 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.030245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-ovndb-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037201 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-public-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037284 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-combined-ca-bundle\") pod 
\"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-httpd-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.037722 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-ovndb-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.045194 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-internal-tls-certs\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.048581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hn82\" (UniqueName: \"kubernetes.io/projected/dae84f47-70ef-4a10-ae62-dae601b0de81-kube-api-access-7hn82\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.049343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dae84f47-70ef-4a10-ae62-dae601b0de81-config\") pod \"neutron-6b8cd78967-6cmpj\" (UID: \"dae84f47-70ef-4a10-ae62-dae601b0de81\") " pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.180493 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.403545 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerStarted","Data":"65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.404983 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.408662 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.410210 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerStarted","Data":"5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.410238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerStarted","Data":"bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48"} Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.411109 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.425583 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" podStartSLOduration=3.425568374 podStartE2EDuration="3.425568374s" podCreationTimestamp="2026-01-20 15:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:18.421892117 +0000 UTC m=+1034.231722995" watchObservedRunningTime="2026-01-20 15:07:18.425568374 +0000 UTC m=+1034.235399232" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.450878 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56bb6988d6-9n8x4" podStartSLOduration=3.450851397 podStartE2EDuration="3.450851397s" podCreationTimestamp="2026-01-20 15:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:18.443906904 +0000 UTC m=+1034.253737762" watchObservedRunningTime="2026-01-20 15:07:18.450851397 +0000 UTC m=+1034.260682285" Jan 20 15:07:18 crc kubenswrapper[4949]: I0120 15:07:18.704309 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68ccd6ddcc-h9gfp" Jan 20 15:07:19 crc kubenswrapper[4949]: I0120 15:07:19.265252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8cd78967-6cmpj"] Jan 20 15:07:19 crc kubenswrapper[4949]: I0120 15:07:19.432151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8cd78967-6cmpj" event={"ID":"dae84f47-70ef-4a10-ae62-dae601b0de81","Type":"ContainerStarted","Data":"7dec160cb1d986d5f09d779a008dfbb52758466dc46ab88b396d87cf74881d6b"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.443359 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6b8cd78967-6cmpj" event={"ID":"dae84f47-70ef-4a10-ae62-dae601b0de81","Type":"ContainerStarted","Data":"6a5ebdc1710d4ea48ad99f93aebfa15d6a020551dd135ac00dfb5980f16b0210"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.443934 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8cd78967-6cmpj" event={"ID":"dae84f47-70ef-4a10-ae62-dae601b0de81","Type":"ContainerStarted","Data":"6c809ddca02d8d3c380965b052e8c8bbdebd3de052894826ecfa7932a84693c9"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.443959 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.445303 4949 generic.go:334] "Generic (PLEG): container finished" podID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerID="32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2" exitCode=0 Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.445365 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerDied","Data":"32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.447879 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerStarted","Data":"57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5"} Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.472464 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b8cd78967-6cmpj" podStartSLOduration=3.472445549 podStartE2EDuration="3.472445549s" podCreationTimestamp="2026-01-20 15:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:20.466983364 +0000 UTC m=+1036.276814222" watchObservedRunningTime="2026-01-20 15:07:20.472445549 +0000 UTC m=+1036.282276407" Jan 20 15:07:20 crc kubenswrapper[4949]: I0120 15:07:20.506361 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j9pm7" podStartSLOduration=3.581924131 podStartE2EDuration="35.506343589s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.585162936 +0000 UTC m=+1003.394993794" lastFinishedPulling="2026-01-20 15:07:19.509582394 +0000 UTC m=+1035.319413252" observedRunningTime="2026-01-20 15:07:20.504213611 +0000 UTC m=+1036.314044469" watchObservedRunningTime="2026-01-20 15:07:20.506343589 +0000 UTC m=+1036.316174447" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.478479 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerID="57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5" exitCode=0 Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.478553 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerDied","Data":"57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5"} Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.833931 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.834798 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.949356 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66d45cfc44-ltr94" Jan 20 15:07:24 crc kubenswrapper[4949]: I0120 15:07:24.949662 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66d45cfc44-ltr94" Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.904703 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.956414 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.956637 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns" containerID="cri-o://ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" gracePeriod=10 Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.978342 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:07:25 crc kubenswrapper[4949]: I0120 15:07:25.982148 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112661 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112771 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112812 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112828 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112916 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112935 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") pod 
\"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.112972 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113038 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") pod \"26b5f79a-1adc-4ec3-a257-ce37600d2357\" (UID: \"26b5f79a-1adc-4ec3-a257-ce37600d2357\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.113076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") pod \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\" (UID: \"1f96f008-7e3c-4512-bddd-51e42a0c7ce2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.114138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs" (OuterVolumeSpecName: "logs") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.119799 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts" (OuterVolumeSpecName: "scripts") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.123068 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.123215 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw" (OuterVolumeSpecName: "kube-api-access-qjkqw") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "kube-api-access-qjkqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.125647 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx" (OuterVolumeSpecName: "kube-api-access-q2ngx") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "kube-api-access-q2ngx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.126381 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.127743 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts" (OuterVolumeSpecName: "scripts") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.147660 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.151708 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data" (OuterVolumeSpecName: "config-data") pod "26b5f79a-1adc-4ec3-a257-ce37600d2357" (UID: "26b5f79a-1adc-4ec3-a257-ce37600d2357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.153570 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data" (OuterVolumeSpecName: "config-data") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.155540 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f96f008-7e3c-4512-bddd-51e42a0c7ce2" (UID: "1f96f008-7e3c-4512-bddd-51e42a0c7ce2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.214640 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.214847 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.214942 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215014 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215086 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215150 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ngx\" (UniqueName: \"kubernetes.io/projected/26b5f79a-1adc-4ec3-a257-ce37600d2357-kube-api-access-q2ngx\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215247 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215321 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215385 4949 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215446 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjkqw\" (UniqueName: \"kubernetes.io/projected/1f96f008-7e3c-4512-bddd-51e42a0c7ce2-kube-api-access-qjkqw\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.215591 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b5f79a-1adc-4ec3-a257-ce37600d2357-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.384838 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.495935 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497528 4949 generic.go:334] "Generic (PLEG): container finished" podID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" exitCode=0 Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497554 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerDied","Data":"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497640 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" event={"ID":"de517a3d-702a-4488-9a61-c1037cbdd5a2","Type":"ContainerDied","Data":"f00ec8d28626a4cd0a80c63c891ae1ccadb47e0b90177f99a5486b458b879328"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497665 4949 scope.go:117] "RemoveContainer" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.497577 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.502807 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vx8lk" event={"ID":"26b5f79a-1adc-4ec3-a257-ce37600d2357","Type":"ContainerDied","Data":"a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.502830 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b9b64dc1a9ed030f6af9cec18dbf043569c5348bf9b16a0cc27b8f07ccac36" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.502862 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vx8lk" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.504584 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9pm7" event={"ID":"1f96f008-7e3c-4512-bddd-51e42a0c7ce2","Type":"ContainerDied","Data":"a255ba2b9bedbb556f04da75175101bb69c927fe2e1d472e5ff955e9dfc35f8c"} Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.504605 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a255ba2b9bedbb556f04da75175101bb69c927fe2e1d472e5ff955e9dfc35f8c" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.504647 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j9pm7" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.518828 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.518991 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.519075 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.519168 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.519238 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") pod \"de517a3d-702a-4488-9a61-c1037cbdd5a2\" (UID: \"de517a3d-702a-4488-9a61-c1037cbdd5a2\") " Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.542725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2" (OuterVolumeSpecName: "kube-api-access-55xc2") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "kube-api-access-55xc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.550598 4949 scope.go:117] "RemoveContainer" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.572650 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.577102 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592009 4949 scope.go:117] "RemoveContainer" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.592439 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8\": container with ID starting with ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8 not found: ID does not exist" containerID="ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592473 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8"} err="failed to get container status \"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8\": rpc error: code = NotFound desc = could not find container \"ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8\": container with ID starting with ee8d87fe2ffdcdd8f6013b612403ddff6e78c83287e88e65b952df7c217a52c8 not found: ID does not exist" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592532 4949 scope.go:117] "RemoveContainer" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.592794 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f\": container with ID starting with 55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f not found: ID does not exist" containerID="55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.592836 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f"} err="failed to get container status \"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f\": rpc error: code = NotFound desc = could not find container \"55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f\": container with ID starting with 55f1d62d1fcadc6fed285912f201742fde3945d04ed6ca45f942852b7bdb069f not found: ID does not exist" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.604031 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.608318 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config" (OuterVolumeSpecName: "config") pod "de517a3d-702a-4488-9a61-c1037cbdd5a2" (UID: "de517a3d-702a-4488-9a61-c1037cbdd5a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609481 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-754d6d4c8d-v7txj"] Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609857 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="init" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609872 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="init" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609882 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609889 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609899 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerName="keystone-bootstrap" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609905 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerName="keystone-bootstrap" Jan 20 15:07:26 crc kubenswrapper[4949]: E0120 15:07:26.609916 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerName="placement-db-sync" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.609922 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerName="placement-db-sync" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.610065 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" containerName="placement-db-sync" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.610082 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" containerName="dnsmasq-dns" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.610122 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" containerName="keystone-bootstrap" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.611296 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615274 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mk2w7" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615422 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615494 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615556 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.615656 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622403 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622427 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622438 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55xc2\" (UniqueName: \"kubernetes.io/projected/de517a3d-702a-4488-9a61-c1037cbdd5a2-kube-api-access-55xc2\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622446 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.622455 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de517a3d-702a-4488-9a61-c1037cbdd5a2-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.626533 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-754d6d4c8d-v7txj"] Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724417 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-combined-ca-bundle\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724487 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-public-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724542 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-internal-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: 
\"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724625 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-scripts\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724654 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-config-data\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724699 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69138579-1fa8-4d89-b94f-46e3424d604c-logs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.724732 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknbn\" (UniqueName: \"kubernetes.io/projected/69138579-1fa8-4d89-b94f-46e3424d604c-kube-api-access-nknbn\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826050 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-config-data\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826119 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69138579-1fa8-4d89-b94f-46e3424d604c-logs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826143 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nknbn\" (UniqueName: \"kubernetes.io/projected/69138579-1fa8-4d89-b94f-46e3424d604c-kube-api-access-nknbn\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-combined-ca-bundle\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826236 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-public-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " 
pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826264 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-internal-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.826291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-scripts\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.827247 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69138579-1fa8-4d89-b94f-46e3424d604c-logs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.832824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-scripts\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.832910 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-combined-ca-bundle\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.833268 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-public-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.833558 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-internal-tls-certs\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.836361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69138579-1fa8-4d89-b94f-46e3424d604c-config-data\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.855094 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nknbn\" (UniqueName: \"kubernetes.io/projected/69138579-1fa8-4d89-b94f-46e3424d604c-kube-api-access-nknbn\") pod \"placement-754d6d4c8d-v7txj\" (UID: \"69138579-1fa8-4d89-b94f-46e3424d604c\") " pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.942566 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.950903 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-2b8mk"] Jan 20 15:07:26 crc kubenswrapper[4949]: I0120 15:07:26.969859 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.099370 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b69c674cf-wdfrq"] Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.107472 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.139303 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.144326 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.155153 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.155571 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.155709 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v78db" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.166176 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.195708 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b69c674cf-wdfrq"] Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254465 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz2q9\" (UniqueName: \"kubernetes.io/projected/7dd53c2b-505a-4783-9e2a-34857e6158ea-kube-api-access-cz2q9\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254568 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-fernet-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254623 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-config-data\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254654 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-credential-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 
15:07:27.254673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-public-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254689 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-combined-ca-bundle\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-scripts\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.254769 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-internal-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357485 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz2q9\" (UniqueName: \"kubernetes.io/projected/7dd53c2b-505a-4783-9e2a-34857e6158ea-kube-api-access-cz2q9\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357579 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-fernet-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357627 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-config-data\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357664 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-credential-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357690 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-public-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357712 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-combined-ca-bundle\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357764 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-scripts\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.357819 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-internal-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.364655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-config-data\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.365081 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-internal-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.365942 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-credential-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.366288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-combined-ca-bundle\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.366949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-fernet-keys\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.370985 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-public-tls-certs\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.373001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd53c2b-505a-4783-9e2a-34857e6158ea-scripts\") pod 
\"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.377875 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz2q9\" (UniqueName: \"kubernetes.io/projected/7dd53c2b-505a-4783-9e2a-34857e6158ea-kube-api-access-cz2q9\") pod \"keystone-7b69c674cf-wdfrq\" (UID: \"7dd53c2b-505a-4783-9e2a-34857e6158ea\") " pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.462682 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.567373 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-754d6d4c8d-v7txj"] Jan 20 15:07:27 crc kubenswrapper[4949]: I0120 15:07:27.945069 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b69c674cf-wdfrq"] Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.546266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b69c674cf-wdfrq" event={"ID":"7dd53c2b-505a-4783-9e2a-34857e6158ea","Type":"ContainerStarted","Data":"bf8105bb971e0d08141087f8b97079725c4f104f877781f4eea677ba659357f5"} Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.546599 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.546610 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b69c674cf-wdfrq" event={"ID":"7dd53c2b-505a-4783-9e2a-34857e6158ea","Type":"ContainerStarted","Data":"b1eb891e17f431d0cbea0e94f6adb27f17d31a16563a13bef867a35db25b798e"} Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.549844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754d6d4c8d-v7txj" event={"ID":"69138579-1fa8-4d89-b94f-46e3424d604c","Type":"ContainerStarted","Data":"d1ce75aa2076f268a07fda8dbf1ce6fe400bd6dcd83ab2f12e603da24dc24461"} Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.549881 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754d6d4c8d-v7txj" event={"ID":"69138579-1fa8-4d89-b94f-46e3424d604c","Type":"ContainerStarted","Data":"b40530f5d488f07d63141d181091d91a616a2221f4da44361e1a9618a85b4f37"} Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.549893 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754d6d4c8d-v7txj" event={"ID":"69138579-1fa8-4d89-b94f-46e3424d604c","Type":"ContainerStarted","Data":"58f3f8ca9a53b066df31967f56d7dcc93c42e798c7f5cc0df8e18866cbdc93f9"} Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.550050 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.550089 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.575290 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b69c674cf-wdfrq" podStartSLOduration=1.575268697 podStartE2EDuration="1.575268697s" podCreationTimestamp="2026-01-20 15:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:28.565141374 
+0000 UTC m=+1044.374972232" watchObservedRunningTime="2026-01-20 15:07:28.575268697 +0000 UTC m=+1044.385099585" Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.591363 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-754d6d4c8d-v7txj" podStartSLOduration=2.5913449 podStartE2EDuration="2.5913449s" podCreationTimestamp="2026-01-20 15:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:28.582238049 +0000 UTC m=+1044.392068897" watchObservedRunningTime="2026-01-20 15:07:28.5913449 +0000 UTC m=+1044.401175758" Jan 20 15:07:28 crc kubenswrapper[4949]: I0120 15:07:28.817907 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de517a3d-702a-4488-9a61-c1037cbdd5a2" path="/var/lib/kubelet/pods/de517a3d-702a-4488-9a61-c1037cbdd5a2/volumes" Jan 20 15:07:29 crc kubenswrapper[4949]: I0120 15:07:29.559343 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerStarted","Data":"8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7"} Jan 20 15:07:29 crc kubenswrapper[4949]: I0120 15:07:29.582846 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lm4wz" podStartSLOduration=3.58346006 podStartE2EDuration="44.58282605s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.526759677 +0000 UTC m=+1003.336590525" lastFinishedPulling="2026-01-20 15:07:28.526125657 +0000 UTC m=+1044.335956515" observedRunningTime="2026-01-20 15:07:29.577159779 +0000 UTC m=+1045.386990647" watchObservedRunningTime="2026-01-20 15:07:29.58282605 +0000 UTC m=+1045.392656908" Jan 20 15:07:31 crc kubenswrapper[4949]: I0120 15:07:31.581183 4949 generic.go:334] "Generic (PLEG): container finished" podID="f476712d-366a-4948-b282-66660a6d81c4" containerID="8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7" exitCode=0 Jan 20 15:07:31 crc kubenswrapper[4949]: I0120 15:07:31.581269 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerDied","Data":"8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7"} Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.532428 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.593822 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") pod \"f476712d-366a-4948-b282-66660a6d81c4\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.593950 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") pod \"f476712d-366a-4948-b282-66660a6d81c4\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.593988 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") pod \"f476712d-366a-4948-b282-66660a6d81c4\" (UID: \"f476712d-366a-4948-b282-66660a6d81c4\") " Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.613363 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f476712d-366a-4948-b282-66660a6d81c4" (UID: "f476712d-366a-4948-b282-66660a6d81c4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.614240 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh" (OuterVolumeSpecName: "kube-api-access-xn6jh") pod "f476712d-366a-4948-b282-66660a6d81c4" (UID: "f476712d-366a-4948-b282-66660a6d81c4"). InnerVolumeSpecName "kube-api-access-xn6jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.614940 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lm4wz" event={"ID":"f476712d-366a-4948-b282-66660a6d81c4","Type":"ContainerDied","Data":"a7bdb1a05ecb96436eaee5571c55a1026eac70b28bfa92211ab6b3111805bc2c"} Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.615006 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bdb1a05ecb96436eaee5571c55a1026eac70b28bfa92211ab6b3111805bc2c" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.615095 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lm4wz" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.637441 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f476712d-366a-4948-b282-66660a6d81c4" (UID: "f476712d-366a-4948-b282-66660a6d81c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.696448 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.696725 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f476712d-366a-4948-b282-66660a6d81c4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.696738 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6jh\" (UniqueName: \"kubernetes.io/projected/f476712d-366a-4948-b282-66660a6d81c4-kube-api-access-xn6jh\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.910137 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56bfc57b96-w7nhj"] Jan 20 15:07:33 crc kubenswrapper[4949]: E0120 15:07:33.910468 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f476712d-366a-4948-b282-66660a6d81c4" containerName="barbican-db-sync" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.910484 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f476712d-366a-4948-b282-66660a6d81c4" containerName="barbican-db-sync" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.910672 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f476712d-366a-4948-b282-66660a6d81c4" containerName="barbican-db-sync" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.911468 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.913751 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.943895 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84d486fc9-sgwzr"] Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.945214 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.947639 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 20 15:07:33 crc kubenswrapper[4949]: I0120 15:07:33.993469 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56bfc57b96-w7nhj"] Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021396 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data-custom\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021456 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-combined-ca-bundle\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021554 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-logs\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.021942 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmvp\" (UniqueName: \"kubernetes.io/projected/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-kube-api-access-hsmvp\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.086741 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d486fc9-sgwzr"] Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.105657 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.107431 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.116025 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125099 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125164 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmvp\" (UniqueName: \"kubernetes.io/projected/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-kube-api-access-hsmvp\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125233 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125264 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125307 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-combined-ca-bundle\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125347 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data-custom\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125370 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-combined-ca-bundle\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125399 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm" 
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125450 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125474 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fd28\" (UniqueName: \"kubernetes.io/projected/0f7e061d-75da-4fc4-80c8-1163e314ebb5-kube-api-access-7fd28\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.125505 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-logs\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126155 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-logs\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data-custom\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126290 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.126336 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7e061d-75da-4fc4-80c8-1163e314ebb5-logs\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.132086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-combined-ca-bundle\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.132580 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.159924 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-config-data-custom\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.161569 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"]
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.162868 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.168356 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.177985 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmvp\" (UniqueName: \"kubernetes.io/projected/02b718a3-85a6-4bb6-9e17-9ff6936cb5c4-kube-api-access-hsmvp\") pod \"barbican-keystone-listener-56bfc57b96-w7nhj\" (UID: \"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4\") " pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.188087 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"]
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.227960 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228054 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228089 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228115 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228140 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228173 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-combined-ca-bundle\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228216 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228241 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228266 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228285 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228305 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fd28\" (UniqueName: \"kubernetes.io/projected/0f7e061d-75da-4fc4-80c8-1163e314ebb5-kube-api-access-7fd28\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228339 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data-custom\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228401 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7e061d-75da-4fc4-80c8-1163e314ebb5-logs\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.228803 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7e061d-75da-4fc4-80c8-1163e314ebb5-logs\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.229610 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.231667 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.232315 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.233386 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.233745 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.237508 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data-custom\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.237650 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-combined-ca-bundle\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.238138 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7e061d-75da-4fc4-80c8-1163e314ebb5-config-data\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.247672 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fd28\" (UniqueName: \"kubernetes.io/projected/0f7e061d-75da-4fc4-80c8-1163e314ebb5-kube-api-access-7fd28\") pod \"barbican-worker-84d486fc9-sgwzr\" (UID: \"0f7e061d-75da-4fc4-80c8-1163e314ebb5\") " pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.251076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"dnsmasq-dns-869f779d85-ljbgm\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.268865 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84d486fc9-sgwzr"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.329983 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330118 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330164 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330196 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.330689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.334834 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.335475 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.335576 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.354960 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod \"barbican-api-7dd8f4d44-xrfpp\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.505317 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.517808 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.834801 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused"
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.890262 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56bfc57b96-w7nhj"]
Jan 20 15:07:34 crc kubenswrapper[4949]: W0120 15:07:34.905642 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b718a3_85a6_4bb6_9e17_9ff6936cb5c4.slice/crio-8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3 WatchSource:0}: Error finding container 8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3: Status 404 returned error can't find the container with id 8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3
Jan 20 15:07:34 crc kubenswrapper[4949]: I0120 15:07:34.952982 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66d45cfc44-ltr94" podUID="08182d24-cea6-4daa-9dbb-efcb48b76434" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused"
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.020138 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d486fc9-sgwzr"]
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.174947 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"]
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.305295 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"]
Jan 20 15:07:35 crc kubenswrapper[4949]: W0120 15:07:35.314368 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c4987c_6ff4_4108_b5f9_6609525cf7ce.slice/crio-0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0 WatchSource:0}: Error finding container 0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0: Status 404 returned error can't find the container with id 0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.656175 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerStarted","Data":"4302ab47c368b5674e528e1d7aae1b710a2dfee3ac0f8609de5205f31154a236"}
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.657319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" event={"ID":"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4","Type":"ContainerStarted","Data":"8a3a97b8bf02ad14a10a6644f98834c0e4f7bc71b5a24333be0798b1c5e562d3"}
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.658418 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerStarted","Data":"0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0"}
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.659833 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d486fc9-sgwzr" event={"ID":"0f7e061d-75da-4fc4-80c8-1163e314ebb5","Type":"ContainerStarted","Data":"649b128f3159abcbb35e72e4487070d44b730a8e1ea3e984efefd13ce20104c8"}
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.662908 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerStarted","Data":"757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5"}
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663100 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent" containerID="cri-o://470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668" gracePeriod=30
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663153 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd" containerID="cri-o://757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5" gracePeriod=30
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663121 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663167 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core" containerID="cri-o://f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d" gracePeriod=30
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.663233 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent" containerID="cri-o://83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a" gracePeriod=30
Jan 20 15:07:35 crc kubenswrapper[4949]: I0120 15:07:35.702501 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.793561908 podStartE2EDuration="50.702477846s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.542896097 +0000 UTC m=+1003.352726955" lastFinishedPulling="2026-01-20 15:07:34.451812035 +0000 UTC m=+1050.261642893" observedRunningTime="2026-01-20 15:07:35.68788263 +0000 UTC m=+1051.497713508" watchObservedRunningTime="2026-01-20 15:07:35.702477846
+0000 UTC m=+1051.512308694" Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672440 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5" exitCode=0 Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672787 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d" exitCode=2 Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672796 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668" exitCode=0 Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672619 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5"} Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d"} Jan 20 15:07:36 crc kubenswrapper[4949]: I0120 15:07:36.672834 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668"} Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.064087 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-ffcb5df54-fhbnh"] Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.066671 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.069764 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.071959 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.077939 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ffcb5df54-fhbnh"] Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094558 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094632 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data-custom\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-internal-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-combined-ca-bundle\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094750 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-public-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094770 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tjq\" (UniqueName: \"kubernetes.io/projected/25689957-1a77-40ab-8a4c-1e40a1524bac-kube-api-access-l6tjq\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.094798 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25689957-1a77-40ab-8a4c-1e40a1524bac-logs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.195934 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-combined-ca-bundle\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.195996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-public-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196033 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tjq\" (UniqueName: \"kubernetes.io/projected/25689957-1a77-40ab-8a4c-1e40a1524bac-kube-api-access-l6tjq\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25689957-1a77-40ab-8a4c-1e40a1524bac-logs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196144 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196174 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data-custom\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.196191 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-internal-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.197118 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25689957-1a77-40ab-8a4c-1e40a1524bac-logs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.200856 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-public-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.201730 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-combined-ca-bundle\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.201978 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data-custom\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.203223 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-internal-tls-certs\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.205045 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25689957-1a77-40ab-8a4c-1e40a1524bac-config-data\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.212891 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tjq\" (UniqueName: \"kubernetes.io/projected/25689957-1a77-40ab-8a4c-1e40a1524bac-kube-api-access-l6tjq\") pod \"barbican-api-ffcb5df54-fhbnh\" (UID: \"25689957-1a77-40ab-8a4c-1e40a1524bac\") " pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.388118 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.683833 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerStarted","Data":"aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad"} Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692461 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerStarted","Data":"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2"} Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692503 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerStarted","Data":"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a"} Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692548 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.692584 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.694927 4949 generic.go:334] "Generic (PLEG): container finished" podID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" exitCode=0 Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.694961 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerDied","Data":"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3"} Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.710070 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2fwjt" podStartSLOduration=5.417277489 podStartE2EDuration="52.710045392s" podCreationTimestamp="2026-01-20 15:06:45 +0000 UTC" firstStartedPulling="2026-01-20 15:06:47.141367477 +0000 UTC m=+1002.951198335" lastFinishedPulling="2026-01-20 15:07:34.43413538 +0000 UTC m=+1050.243966238" observedRunningTime="2026-01-20 15:07:37.702868333 +0000 UTC m=+1053.512699191" watchObservedRunningTime="2026-01-20 15:07:37.710045392 +0000 UTC m=+1053.519876250" Jan 20 15:07:37 crc kubenswrapper[4949]: I0120 15:07:37.732743 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podStartSLOduration=3.732717197 podStartE2EDuration="3.732717197s" podCreationTimestamp="2026-01-20 15:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:37.724741842 +0000 UTC m=+1053.534572710" watchObservedRunningTime="2026-01-20 15:07:37.732717197 +0000 UTC m=+1053.542548055" Jan 20 15:07:38 crc kubenswrapper[4949]: I0120 15:07:38.354848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-ffcb5df54-fhbnh"] Jan 20 15:07:38 crc kubenswrapper[4949]: W0120 15:07:38.576665 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25689957_1a77_40ab_8a4c_1e40a1524bac.slice/crio-2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c WatchSource:0}: Error finding container 2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c: Status 404 returned error can't find the container with id 2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c Jan 20 15:07:38 crc kubenswrapper[4949]: I0120 15:07:38.704653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ffcb5df54-fhbnh" event={"ID":"25689957-1a77-40ab-8a4c-1e40a1524bac","Type":"ContainerStarted","Data":"2c813b210f8d3888ffd0f0d2492b2b0ec8fb8d37fc956835eabf8f9d9b0e056c"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.714377 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d486fc9-sgwzr" event={"ID":"0f7e061d-75da-4fc4-80c8-1163e314ebb5","Type":"ContainerStarted","Data":"052edb9be82749cb28eff78e8b63c5a2cac63350ba500c0c50dd0f85b7c1da40"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.714968 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d486fc9-sgwzr" event={"ID":"0f7e061d-75da-4fc4-80c8-1163e314ebb5","Type":"ContainerStarted","Data":"6f489479e016971ffe2a3d2ede4b55aa5ce8f7db149e5b078813074560bced9c"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.715917 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ffcb5df54-fhbnh" event={"ID":"25689957-1a77-40ab-8a4c-1e40a1524bac","Type":"ContainerStarted","Data":"9e61e01af472bf05d6d7c40fadd70ed45403f8ec58aa8dc6982f947334ed6e8e"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.715948 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-ffcb5df54-fhbnh" event={"ID":"25689957-1a77-40ab-8a4c-1e40a1524bac","Type":"ContainerStarted","Data":"a7821e5a97cb61b41b5f3b0d62d2b6b6ef692607878a720c03e99fb5afb39090"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.715991 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.716031 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.717229 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerStarted","Data":"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.717338 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.719279 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" event={"ID":"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4","Type":"ContainerStarted","Data":"477a2ea5f293cd656ceee7c1a14b62bd3ba78bc275f987d9d445f6e4baae801b"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.719320 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" event={"ID":"02b718a3-85a6-4bb6-9e17-9ff6936cb5c4","Type":"ContainerStarted","Data":"5019b346386e48fc9c1b559559e0adc8b49ae61a7841975c4ba132f1883eae38"} Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 
15:07:39.734130 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84d486fc9-sgwzr" podStartSLOduration=2.860268559 podStartE2EDuration="6.734106055s" podCreationTimestamp="2026-01-20 15:07:33 +0000 UTC" firstStartedPulling="2026-01-20 15:07:35.067409895 +0000 UTC m=+1050.877240753" lastFinishedPulling="2026-01-20 15:07:38.941247371 +0000 UTC m=+1054.751078249" observedRunningTime="2026-01-20 15:07:39.730720397 +0000 UTC m=+1055.540551265" watchObservedRunningTime="2026-01-20 15:07:39.734106055 +0000 UTC m=+1055.543936913" Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.750416 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56bfc57b96-w7nhj" podStartSLOduration=2.713776597 podStartE2EDuration="6.750395985s" podCreationTimestamp="2026-01-20 15:07:33 +0000 UTC" firstStartedPulling="2026-01-20 15:07:34.914465458 +0000 UTC m=+1050.724296316" lastFinishedPulling="2026-01-20 15:07:38.951084846 +0000 UTC m=+1054.760915704" observedRunningTime="2026-01-20 15:07:39.747297556 +0000 UTC m=+1055.557128444" watchObservedRunningTime="2026-01-20 15:07:39.750395985 +0000 UTC m=+1055.560226843" Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.769395 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" podStartSLOduration=6.769376382 podStartE2EDuration="6.769376382s" podCreationTimestamp="2026-01-20 15:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:39.765118876 +0000 UTC m=+1055.574949734" watchObservedRunningTime="2026-01-20 15:07:39.769376382 +0000 UTC m=+1055.579207240" Jan 20 15:07:39 crc kubenswrapper[4949]: I0120 15:07:39.805751 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-ffcb5df54-fhbnh" podStartSLOduration=2.805707412 podStartE2EDuration="2.805707412s" podCreationTimestamp="2026-01-20 15:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:39.796357044 +0000 UTC m=+1055.606187902" watchObservedRunningTime="2026-01-20 15:07:39.805707412 +0000 UTC m=+1055.615538270" Jan 20 15:07:40 crc kubenswrapper[4949]: I0120 15:07:40.742645 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerID="83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a" exitCode=0 Jan 20 15:07:40 crc kubenswrapper[4949]: I0120 15:07:40.742905 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a"} Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.180665 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.321788 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.321851 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322011 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322084 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322118 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322227 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322270 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") pod \"d4755b36-8e78-4503-aa84-efb904d6e6d9\" (UID: \"d4755b36-8e78-4503-aa84-efb904d6e6d9\") " Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322454 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322728 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.322816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.327415 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg" (OuterVolumeSpecName: "kube-api-access-q2fmg") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "kube-api-access-q2fmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.327624 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts" (OuterVolumeSpecName: "scripts") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.348582 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.398482 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425287 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425321 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425333 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fmg\" (UniqueName: \"kubernetes.io/projected/d4755b36-8e78-4503-aa84-efb904d6e6d9-kube-api-access-q2fmg\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425345 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.425353 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4755b36-8e78-4503-aa84-efb904d6e6d9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.436465 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data" (OuterVolumeSpecName: "config-data") pod "d4755b36-8e78-4503-aa84-efb904d6e6d9" (UID: "d4755b36-8e78-4503-aa84-efb904d6e6d9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.526848 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4755b36-8e78-4503-aa84-efb904d6e6d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.753733 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4755b36-8e78-4503-aa84-efb904d6e6d9","Type":"ContainerDied","Data":"d9198a98b9f1f6021caa331f5093846a0dd1690786dc4510142a57f8e1848ff4"} Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.753787 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.753813 4949 scope.go:117] "RemoveContainer" containerID="757d099d9ebadb0d305a43cbe75c68b54b2df410ee0d265f3593e31f1aa349c5" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.792355 4949 scope.go:117] "RemoveContainer" containerID="f848550709cc6d2dfdf342a1d4f2aa0b203ceaf6fe9847e84fe8825fdd98816d" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.798382 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.811060 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.819289 4949 scope.go:117] "RemoveContainer" containerID="83f678b1700c3dce1f11a569652779cebf40be71cbb04abb9a22cf407e99af5a" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847230 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847578 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847593 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core" Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847607 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847613 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd" Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847624 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847631 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent" Jan 20 15:07:41 crc kubenswrapper[4949]: E0120 15:07:41.847657 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847662 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847815 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="proxy-httpd" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847824 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-central-agent" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847837 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="ceilometer-notification-agent" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.847844 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" containerName="sg-core" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.848941 4949 scope.go:117] "RemoveContainer" containerID="470d690bc991b849f235bbacecda281ae4377026410bb1cdc476740edd48c668" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.854155 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.858643 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.858899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:07:41 crc kubenswrapper[4949]: I0120 15:07:41.871348 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.039387 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.039884 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.039926 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040008 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040033 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.040162 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142417 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142470 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142507 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142706 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142758 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.142787 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.143284 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.145154 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.149402 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.150393 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.150671 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.151955 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.166890 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod \"ceilometer-0\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.183754 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:07:42 crc kubenswrapper[4949]: W0120 15:07:42.617105 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5dc0c3_1563_4605_81e6_2ed8a343353b.slice/crio-a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74 WatchSource:0}: Error finding container a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74: Status 404 returned error can't find the container with id a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74 Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.617995 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.762452 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74"} Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.765821 4949 generic.go:334] "Generic (PLEG): container finished" podID="c18369cb-0b5b-40f7-bc73-af04fb510f31" containerID="aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad" exitCode=0 Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.765915 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerDied","Data":"aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad"} Jan 20 15:07:42 crc kubenswrapper[4949]: I0120 15:07:42.801734 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4755b36-8e78-4503-aa84-efb904d6e6d9" path="/var/lib/kubelet/pods/d4755b36-8e78-4503-aa84-efb904d6e6d9/volumes" Jan 20 15:07:43 crc kubenswrapper[4949]: I0120 15:07:43.780922 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.186067 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.280838 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281152 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281191 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281226 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") pod \"c18369cb-0b5b-40f7-bc73-af04fb510f31\" (UID: \"c18369cb-0b5b-40f7-bc73-af04fb510f31\") " Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.281730 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.286600 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd" (OuterVolumeSpecName: "kube-api-access-v4htd") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "kube-api-access-v4htd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.286617 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts" (OuterVolumeSpecName: "scripts") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.289561 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.313993 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.345696 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data" (OuterVolumeSpecName: "config-data") pod "c18369cb-0b5b-40f7-bc73-af04fb510f31" (UID: "c18369cb-0b5b-40f7-bc73-af04fb510f31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383067 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383119 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c18369cb-0b5b-40f7-bc73-af04fb510f31-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383131 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383142 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383153 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c18369cb-0b5b-40f7-bc73-af04fb510f31-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.383164 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4htd\" (UniqueName: \"kubernetes.io/projected/c18369cb-0b5b-40f7-bc73-af04fb510f31-kube-api-access-v4htd\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.507611 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.591889 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.592125 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" 
podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="dnsmasq-dns" containerID="cri-o://65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a" gracePeriod=10 Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828229 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828495 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2fwjt" event={"ID":"c18369cb-0b5b-40f7-bc73-af04fb510f31","Type":"ContainerDied","Data":"433534aab58a8907724519ebbdb734c9b17b626693f00598ad129acc054d365a"} Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828535 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433534aab58a8907724519ebbdb734c9b17b626693f00598ad129acc054d365a" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.828597 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2fwjt" Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.869781 4949 generic.go:334] "Generic (PLEG): container finished" podID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerID="65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a" exitCode=0 Jan 20 15:07:44 crc kubenswrapper[4949]: I0120 15:07:44.869826 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerDied","Data":"65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.105751 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:07:45 crc kubenswrapper[4949]: E0120 15:07:45.106790 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" containerName="cinder-db-sync" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.106828 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" containerName="cinder-db-sync" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.107062 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" containerName="cinder-db-sync" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.108199 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.134772 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.137134 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.139091 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6qnbk" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.139563 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.139941 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.155036 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.185009 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.199774 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207830 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207923 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.207981 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311361 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311546 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311579 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311635 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311662 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311735 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.311780 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.312839 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.313354 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.313370 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.313506 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.321383 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.323363 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.332891 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.338205 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.372727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"dnsmasq-dns-58db5546cc-8g4zv\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415789 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415882 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415911 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.415984 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416028 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416129 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416179 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416200 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416268 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.416300 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.419818 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.429841 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.432463 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.433132 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.444128 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.449466 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"cinder-scheduler-0\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.455537 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.494963 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521582 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521643 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521659 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521698 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521780 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.521835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 
15:07:45.525355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.525905 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.528868 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.529012 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.530571 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.543063 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.548866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"cinder-api-0\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.671354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.684762 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829301 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829454 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829604 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.829754 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") pod \"b0cd5b2d-6321-4992-be2e-5926f77e0790\" (UID: \"b0cd5b2d-6321-4992-be2e-5926f77e0790\") " Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.835129 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm" (OuterVolumeSpecName: "kube-api-access-gxwzm") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "kube-api-access-gxwzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.876865 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.884674 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" event={"ID":"b0cd5b2d-6321-4992-be2e-5926f77e0790","Type":"ContainerDied","Data":"7264a419821a7cd4155fa26254f761dbcc032333908b45daed1c6c1c517da1c9"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.884734 4949 scope.go:117] "RemoveContainer" containerID="65a376aa2edfbae52414de468d55bd7f13bcd210341533dde87c867951ba8e8a" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.884854 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-v55wx" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.891205 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903135 4949 generic.go:334] "Generic (PLEG): container finished" podID="f1a77932-734e-416b-a182-5e84f6749d95" containerID="6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6" exitCode=137 Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903168 4949 generic.go:334] "Generic (PLEG): container finished" podID="f1a77932-734e-416b-a182-5e84f6749d95" containerID="de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427" exitCode=137 Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerDied","Data":"6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.903241 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerDied","Data":"de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.905286 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.921721 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925506 4949 generic.go:334] "Generic (PLEG): container finished" podID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerID="1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b" exitCode=137 Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925635 4949 generic.go:334] "Generic (PLEG): container finished" podID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerID="893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec" exitCode=137 Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925580 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerDied","Data":"1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.925822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerDied","Data":"893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec"} Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.940937 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.943088 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.943299 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxwzm\" (UniqueName: \"kubernetes.io/projected/b0cd5b2d-6321-4992-be2e-5926f77e0790-kube-api-access-gxwzm\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.943380 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.961272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config" (OuterVolumeSpecName: "config") pod "b0cd5b2d-6321-4992-be2e-5926f77e0790" (UID: "b0cd5b2d-6321-4992-be2e-5926f77e0790"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:45 crc kubenswrapper[4949]: I0120 15:07:45.986662 4949 scope.go:117] "RemoveContainer" containerID="8c227b56e33a53d202583a4f1ddca6603645856cbfcd9ad6c053606a3845fa21" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.045528 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0cd5b2d-6321-4992-be2e-5926f77e0790-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.131125 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.216820 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.237220 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.237643 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264395 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264612 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264657 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.264733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") pod \"f1a77932-734e-416b-a182-5e84f6749d95\" (UID: \"f1a77932-734e-416b-a182-5e84f6749d95\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.274466 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.277819 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs" (OuterVolumeSpecName: "logs") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.281544 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-v55wx"] Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.281743 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5" (OuterVolumeSpecName: "kube-api-access-rxxd5") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "kube-api-access-rxxd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.309913 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data" (OuterVolumeSpecName: "config-data") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.357011 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts" (OuterVolumeSpecName: "scripts") pod "f1a77932-734e-416b-a182-5e84f6749d95" (UID: "f1a77932-734e-416b-a182-5e84f6749d95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378085 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378124 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378308 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") pod \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\" (UID: \"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964\") " Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378709 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxxd5\" (UniqueName: \"kubernetes.io/projected/f1a77932-734e-416b-a182-5e84f6749d95-kube-api-access-rxxd5\") on node \"crc\" 
DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378726 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1a77932-734e-416b-a182-5e84f6749d95-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378735 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378742 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1a77932-734e-416b-a182-5e84f6749d95-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.378750 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1a77932-734e-416b-a182-5e84f6749d95-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.379125 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs" (OuterVolumeSpecName: "logs") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.388768 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5" (OuterVolumeSpecName: "kube-api-access-sjtj5") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "kube-api-access-sjtj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.403780 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.419608 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data" (OuterVolumeSpecName: "config-data") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.433215 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts" (OuterVolumeSpecName: "scripts") pod "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" (UID: "8fe9012c-f3f8-4a8a-b6d0-1acc0120b964"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.479961 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.479998 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjtj5\" (UniqueName: \"kubernetes.io/projected/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-kube-api-access-sjtj5\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.480008 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.480017 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.480028 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.575971 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.616501 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.643813 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.818696 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" path="/var/lib/kubelet/pods/b0cd5b2d-6321-4992-be2e-5926f77e0790/volumes" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.965743 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbd48cfd5-mt6hk" event={"ID":"8fe9012c-f3f8-4a8a-b6d0-1acc0120b964","Type":"ContainerDied","Data":"d010c875444d4ba584246f10f1a99b66845b15ef9ef3b2384373a0f15b7f64f0"} Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.966041 4949 scope.go:117] "RemoveContainer" containerID="1f04f637992477c60405ab1d7ada7b6637ec4ddb3f82a81040c409522e0a028b" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.966149 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cbd48cfd5-mt6hk" Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.983313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerStarted","Data":"531ef9b2d0a73d666ccd1d688de60c4cf9fa28d82e193bd4da5c061157252815"} Jan 20 15:07:46 crc kubenswrapper[4949]: I0120 15:07:46.990490 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerStarted","Data":"a21906d0080002b80059b53f2425557bdbad30ceb5fad9442aae6757d6585801"} Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:46.998581 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"] Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.014733 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cbd48cfd5-mt6hk"] Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.027583 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerStarted","Data":"052fba35240bac70130e0cfdaa3376b77a051a12b8b97d00e1f00afe30ca5b57"} Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.053321 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68ccd6ddcc-h9gfp" event={"ID":"f1a77932-734e-416b-a182-5e84f6749d95","Type":"ContainerDied","Data":"b57ca263b5ed36cc263a490d1635d52d45b0e97e2657741bc23688196ca5d55f"} Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.053431 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68ccd6ddcc-h9gfp" Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.103318 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"] Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.112397 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68ccd6ddcc-h9gfp"] Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.495753 4949 scope.go:117] "RemoveContainer" containerID="893ca4e88a3d7c27ac812e7db5892668ab848d3c2c12415cf692a28890920bec" Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.532116 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.544670 4949 scope.go:117] "RemoveContainer" containerID="6317f0514460d8355ad2d0bf31c83d4c0dcd6cee56a29a7c11f650564fb22ae6" Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.756787 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.841387 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.974464 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66d45cfc44-ltr94" Jan 20 15:07:47 crc kubenswrapper[4949]: I0120 15:07:47.975724 4949 scope.go:117] "RemoveContainer" containerID="de5fbdf33a5ad516effda359202e2632e7c5407708538e4d08854ab6fe4a5427" Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.031948 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:07:48 crc 
kubenswrapper[4949]: I0120 15:07:48.085199 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerStarted","Data":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.086597 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.105903 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerStarted","Data":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"} Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.132972 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.997727787 podStartE2EDuration="7.132954856s" podCreationTimestamp="2026-01-20 15:07:41 +0000 UTC" firstStartedPulling="2026-01-20 15:07:42.619948494 +0000 UTC m=+1058.429779362" lastFinishedPulling="2026-01-20 15:07:46.755175573 +0000 UTC m=+1062.565006431" observedRunningTime="2026-01-20 15:07:48.120548509 +0000 UTC m=+1063.930379367" watchObservedRunningTime="2026-01-20 15:07:48.132954856 +0000 UTC m=+1063.942785714" Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.134324 4949 generic.go:334] "Generic (PLEG): container finished" podID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" exitCode=0 Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.134389 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerDied","Data":"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b"} Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.134408 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerStarted","Data":"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b"} Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.135329 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.163419 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" podStartSLOduration=3.163405179 podStartE2EDuration="3.163405179s" podCreationTimestamp="2026-01-20 15:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:48.160443494 +0000 UTC m=+1063.970274352" watchObservedRunningTime="2026-01-20 15:07:48.163405179 +0000 UTC m=+1063.973236037" Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.212828 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6b8cd78967-6cmpj" Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.273403 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.274146 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56bb6988d6-9n8x4" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" 
containerName="neutron-api" containerID="cri-o://bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48" gracePeriod=30 Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.274744 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56bb6988d6-9n8x4" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" containerID="cri-o://5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391" gracePeriod=30 Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.802744 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" path="/var/lib/kubelet/pods/8fe9012c-f3f8-4a8a-b6d0-1acc0120b964/volumes" Jan 20 15:07:48 crc kubenswrapper[4949]: I0120 15:07:48.803376 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a77932-734e-416b-a182-5e84f6749d95" path="/var/lib/kubelet/pods/f1a77932-734e-416b-a182-5e84f6749d95/volumes" Jan 20 15:07:49 crc kubenswrapper[4949]: I0120 15:07:49.162203 4949 generic.go:334] "Generic (PLEG): container finished" podID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerID="5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391" exitCode=0 Jan 20 15:07:49 crc kubenswrapper[4949]: I0120 15:07:49.162285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerDied","Data":"5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391"} Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.094314 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.174410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerStarted","Data":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"} Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.174593 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" containerID="cri-o://f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" gracePeriod=30 Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.174881 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.175268 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" containerID="cri-o://6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" gracePeriod=30 Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.199610 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerStarted","Data":"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461"} Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.199683 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerStarted","Data":"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603"} Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.231670 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.231647343 podStartE2EDuration="5.231647343s" podCreationTimestamp="2026-01-20 15:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:50.194731744 +0000 UTC m=+1066.004562622" watchObservedRunningTime="2026-01-20 15:07:50.231647343 +0000 UTC m=+1066.041478201" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.237287 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.6611253809999997 podStartE2EDuration="5.237264052s" podCreationTimestamp="2026-01-20 15:07:45 +0000 UTC" firstStartedPulling="2026-01-20 15:07:46.618535797 +0000 UTC m=+1062.428366655" lastFinishedPulling="2026-01-20 15:07:48.194674468 +0000 UTC m=+1064.004505326" observedRunningTime="2026-01-20 15:07:50.227018315 +0000 UTC m=+1066.036849173" watchObservedRunningTime="2026-01-20 15:07:50.237264052 +0000 UTC m=+1066.047094910" Jan 20 15:07:50 crc kubenswrapper[4949]: E0120 15:07:50.320462 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a6ab06_8876_4497_9f92_ad1d32c55d9c.slice/crio-conmon-f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a.scope\": RecentStats: unable to find data in memory cache]" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.496326 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.529156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66d45cfc44-ltr94" Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.606364 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"] Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.607664 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log" containerID="cri-o://89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc" gracePeriod=30 Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.608085 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" containerID="cri-o://03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74" gracePeriod=30 Jan 20 15:07:50 crc kubenswrapper[4949]: I0120 15:07:50.809298 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.016332 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210033 4949 generic.go:334] "Generic (PLEG): container finished" podID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" exitCode=0 Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210077 4949 generic.go:334] "Generic (PLEG): container finished" podID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" exitCode=143 Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210113 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210188 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerDied","Data":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"} Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210226 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerDied","Data":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"} Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210241 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a6ab06-8876-4497-9f92-ad1d32c55d9c","Type":"ContainerDied","Data":"531ef9b2d0a73d666ccd1d688de60c4cf9fa28d82e193bd4da5c061157252815"} Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210260 4949 scope.go:117] "RemoveContainer" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210385 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210488 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.210631 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211038 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211152 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs" (OuterVolumeSpecName: "logs") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211571 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211663 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.211709 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") pod \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\" (UID: \"e2a6ab06-8876-4497-9f92-ad1d32c55d9c\") " Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.213184 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.213387 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.227675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7" (OuterVolumeSpecName: "kube-api-access-zdvb7") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "kube-api-access-zdvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.227978 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.229503 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts" (OuterVolumeSpecName: "scripts") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.312633 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316232 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316259 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316270 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.316282 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdvb7\" (UniqueName: \"kubernetes.io/projected/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-kube-api-access-zdvb7\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.335624 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data" (OuterVolumeSpecName: "config-data") pod "e2a6ab06-8876-4497-9f92-ad1d32c55d9c" (UID: "e2a6ab06-8876-4497-9f92-ad1d32c55d9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.362202 4949 scope.go:117] "RemoveContainer" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.381202 4949 scope.go:117] "RemoveContainer" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.381706 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": container with ID starting with 6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8 not found: ID does not exist" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.381765 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"} err="failed to get container status \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": rpc error: code = NotFound desc = could not find container \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": container with ID starting with 6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8 not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.381788 4949 scope.go:117] "RemoveContainer" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.382087 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": container with ID starting with f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a not found: ID does not exist" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382152 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"} err="failed to get container status \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": rpc error: code = NotFound desc = could not find container \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": container with ID starting with f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382177 4949 scope.go:117] "RemoveContainer" containerID="6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382406 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8"} err="failed to get container status \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": rpc error: code = NotFound desc = could not find container \"6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8\": container with ID starting with 6416d26e2ad97404fdb81c62b4160fb698a173fe40011454e936ad697eb30ed8 not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382423 4949 
scope.go:117] "RemoveContainer" containerID="f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.382639 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a"} err="failed to get container status \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": rpc error: code = NotFound desc = could not find container \"f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a\": container with ID starting with f509b1ce7a1c5f0737f360beedf4b5b58c8a140b6575e5cbe40571c21818388a not found: ID does not exist" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.417746 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a6ab06-8876-4497-9f92-ad1d32c55d9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.516658 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-ffcb5df54-fhbnh" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.591411 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.600595 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.614615 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.614826 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" containerID="cri-o://5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" gracePeriod=30 Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.615166 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" containerID="cri-o://4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" gracePeriod=30 Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.632893 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": EOF" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633093 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633438 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633448 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633477 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="dnsmasq-dns" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633483 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" 
containerName="dnsmasq-dns" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633495 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633501 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633509 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633529 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633539 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633544 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633556 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="init" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633561 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="init" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633571 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633576 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" Jan 20 15:07:51 crc kubenswrapper[4949]: E0120 15:07:51.633584 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633590 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633734 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633761 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cd5b2d-6321-4992-be2e-5926f77e0790" containerName="dnsmasq-dns" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633777 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633787 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api-log" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633804 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fe9012c-f3f8-4a8a-b6d0-1acc0120b964" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.633822 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" containerName="cinder-api" Jan 20 15:07:51 
crc kubenswrapper[4949]: I0120 15:07:51.633835 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a77932-734e-416b-a182-5e84f6749d95" containerName="horizon" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.634709 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.639478 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.640036 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.640603 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.660052 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723573 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723718 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72vn\" (UniqueName: \"kubernetes.io/projected/605e8425-f80d-4cd4-981d-afb431ec676f-kube-api-access-p72vn\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723798 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-scripts\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723856 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/605e8425-f80d-4cd4-981d-afb431ec676f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723946 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data-custom\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723973 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.723993 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-public-tls-certs\") pod 
\"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.724023 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/605e8425-f80d-4cd4-981d-afb431ec676f-logs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.724116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825719 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825809 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72vn\" (UniqueName: \"kubernetes.io/projected/605e8425-f80d-4cd4-981d-afb431ec676f-kube-api-access-p72vn\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-scripts\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825886 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/605e8425-f80d-4cd4-981d-afb431ec676f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825919 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data-custom\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825937 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825950 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.825973 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/605e8425-f80d-4cd4-981d-afb431ec676f-logs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.826017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.826779 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/605e8425-f80d-4cd4-981d-afb431ec676f-logs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.826833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/605e8425-f80d-4cd4-981d-afb431ec676f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.830587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-scripts\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.831026 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.831565 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.838106 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.838199 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-config-data-custom\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.845901 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72vn\" (UniqueName: \"kubernetes.io/projected/605e8425-f80d-4cd4-981d-afb431ec676f-kube-api-access-p72vn\") pod \"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.847014 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605e8425-f80d-4cd4-981d-afb431ec676f-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"605e8425-f80d-4cd4-981d-afb431ec676f\") " pod="openstack/cinder-api-0" Jan 20 15:07:51 crc kubenswrapper[4949]: I0120 15:07:51.972580 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.228231 4949 generic.go:334] "Generic (PLEG): container finished" podID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerID="bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48" exitCode=0 Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.228292 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerDied","Data":"bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48"} Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.231863 4949 generic.go:334] "Generic (PLEG): container finished" podID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" exitCode=143 Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.231906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerDied","Data":"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a"} Jan 20 15:07:52 crc kubenswrapper[4949]: W0120 15:07:52.444395 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605e8425_f80d_4cd4_981d_afb431ec676f.slice/crio-f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da WatchSource:0}: Error finding container f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da: Status 404 returned error can't find the container with id f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.446886 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.449147 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638473 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638539 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638567 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638601 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.638684 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") pod \"98759ef1-a1b3-414c-8131-cbdb90833a60\" (UID: \"98759ef1-a1b3-414c-8131-cbdb90833a60\") " Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.645386 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.665083 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz" (OuterVolumeSpecName: "kube-api-access-5b4dz") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "kube-api-access-5b4dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.690232 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.692731 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config" (OuterVolumeSpecName: "config") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.712428 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "98759ef1-a1b3-414c-8131-cbdb90833a60" (UID: "98759ef1-a1b3-414c-8131-cbdb90833a60"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.739995 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.740269 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b4dz\" (UniqueName: \"kubernetes.io/projected/98759ef1-a1b3-414c-8131-cbdb90833a60-kube-api-access-5b4dz\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.740285 4949 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.740295 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.740305 4949 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/98759ef1-a1b3-414c-8131-cbdb90833a60-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:52 crc kubenswrapper[4949]: I0120 15:07:52.805199 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a6ab06-8876-4497-9f92-ad1d32c55d9c" path="/var/lib/kubelet/pods/e2a6ab06-8876-4497-9f92-ad1d32c55d9c/volumes" Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.247727 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"605e8425-f80d-4cd4-981d-afb431ec676f","Type":"ContainerStarted","Data":"dbf190b8fc9200feb4f932ca6e3e11c3b22be8af4ed6ed1fb97690486206b2b0"} Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.247782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"605e8425-f80d-4cd4-981d-afb431ec676f","Type":"ContainerStarted","Data":"f2eec2efa113e8a3b333cb888dbc4659e1a59c69b2cdf8dfd9019b4a047558da"} Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.250342 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56bb6988d6-9n8x4" event={"ID":"98759ef1-a1b3-414c-8131-cbdb90833a60","Type":"ContainerDied","Data":"ddacbe5809c0f3426708e64d9337ca1ab93d7f38d1a8f505676198c5a7a916e0"} Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.250410 4949 scope.go:117] "RemoveContainer" containerID="5d54c87b110d4a55bd482813e142e462c9327e6babead0ba2815c834eba1f391" Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.250453 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56bb6988d6-9n8x4" Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.280532 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.290990 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56bb6988d6-9n8x4"] Jan 20 15:07:53 crc kubenswrapper[4949]: I0120 15:07:53.310251 4949 scope.go:117] "RemoveContainer" containerID="bf41115faa283ba2b33c59f5a711330fde39b564dc46b8504e4754abfddeda48" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.267554 4949 generic.go:334] "Generic (PLEG): container finished" podID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerID="03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74" exitCode=0 Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.267603 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerDied","Data":"03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74"} Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.270009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"605e8425-f80d-4cd4-981d-afb431ec676f","Type":"ContainerStarted","Data":"018e8f77b260f7ae0f2759d6b0154a2c7badd2338405087345879685208f320c"} Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.270117 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.291928 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.291906887 podStartE2EDuration="3.291906887s" podCreationTimestamp="2026-01-20 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:07:54.286410431 +0000 UTC m=+1070.096241289" watchObservedRunningTime="2026-01-20 15:07:54.291906887 +0000 UTC m=+1070.101737745" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.765050 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:36756->10.217.0.152:9311: read: connection reset by peer" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.765095 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dd8f4d44-xrfpp" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:36772->10.217.0.152:9311: read: connection reset by peer" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.807371 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" path="/var/lib/kubelet/pods/98759ef1-a1b3-414c-8131-cbdb90833a60/volumes" Jan 20 15:07:54 crc kubenswrapper[4949]: I0120 15:07:54.831707 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: 
connect: connection refused" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.227795 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280802 4949 generic.go:334] "Generic (PLEG): container finished" podID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" exitCode=0 Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280842 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dd8f4d44-xrfpp" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280877 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerDied","Data":"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2"} Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dd8f4d44-xrfpp" event={"ID":"57c4987c-6ff4-4108-b5f9-6609525cf7ce","Type":"ContainerDied","Data":"0bb8b244100ab75a4b07a3efd553356cd51f8ab435798ecaacf558c9264be9c0"} Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.280949 4949 scope.go:117] "RemoveContainer" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284489 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284672 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284763 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284814 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.284837 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") pod \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\" (UID: \"57c4987c-6ff4-4108-b5f9-6609525cf7ce\") " Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.285471 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs" (OuterVolumeSpecName: "logs") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: 
"57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.290689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.290757 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz" (OuterVolumeSpecName: "kube-api-access-rbdvz") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "kube-api-access-rbdvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.314266 4949 scope.go:117] "RemoveContainer" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.315197 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.355711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data" (OuterVolumeSpecName: "config-data") pod "57c4987c-6ff4-4108-b5f9-6609525cf7ce" (UID: "57c4987c-6ff4-4108-b5f9-6609525cf7ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.380627 4949 scope.go:117] "RemoveContainer" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" Jan 20 15:07:55 crc kubenswrapper[4949]: E0120 15:07:55.380965 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2\": container with ID starting with 4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2 not found: ID does not exist" containerID="4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.380999 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2"} err="failed to get container status \"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2\": rpc error: code = NotFound desc = could not find container \"4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2\": container with ID starting with 4dc06cb59810c498a60bdec2876bd6e154085319a68098cf79f9d5a38fb91ac2 not found: ID does not exist" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.381023 4949 scope.go:117] "RemoveContainer" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" Jan 20 15:07:55 crc kubenswrapper[4949]: E0120 15:07:55.381349 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a\": container with ID starting with 5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a not found: ID does not exist" containerID="5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.381385 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a"} err="failed to get container status \"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a\": rpc error: code = NotFound desc = could not find container \"5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a\": container with ID starting with 5cf386e30a3275f221292651c36994b5d41cddb4535fe5cf95fd6dd1b913ca3a not found: ID does not exist" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387747 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c4987c-6ff4-4108-b5f9-6609525cf7ce-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387796 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387813 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbdvz\" (UniqueName: \"kubernetes.io/projected/57c4987c-6ff4-4108-b5f9-6609525cf7ce-kube-api-access-rbdvz\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387826 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.387838 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c4987c-6ff4-4108-b5f9-6609525cf7ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.458840 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.516098 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.517018 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" containerID="cri-o://28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" gracePeriod=10 Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.614924 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.624315 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dd8f4d44-xrfpp"] Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.757259 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 15:07:55 crc kubenswrapper[4949]: I0120 15:07:55.803076 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.090637 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.200914 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.200998 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.201021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.201103 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.201151 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") pod \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\" (UID: \"f5bea5c0-8837-4f65-8bd5-40d0d8201410\") " Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.213851 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm" (OuterVolumeSpecName: "kube-api-access-zqqrm") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "kube-api-access-zqqrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.253092 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.258717 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config" (OuterVolumeSpecName: "config") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.262036 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.265935 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5bea5c0-8837-4f65-8bd5-40d0d8201410" (UID: "f5bea5c0-8837-4f65-8bd5-40d0d8201410"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.290582 4949 generic.go:334] "Generic (PLEG): container finished" podID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" exitCode=0 Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.290777 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" containerID="cri-o://ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" gracePeriod=30 Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291715 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291743 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerDied","Data":"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4"} Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291774 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ljbgm" event={"ID":"f5bea5c0-8837-4f65-8bd5-40d0d8201410","Type":"ContainerDied","Data":"4302ab47c368b5674e528e1d7aae1b710a2dfee3ac0f8609de5205f31154a236"} Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.291814 4949 scope.go:117] "RemoveContainer" containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.292797 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" containerID="cri-o://17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" gracePeriod=30 Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303281 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303453 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqqrm\" (UniqueName: \"kubernetes.io/projected/f5bea5c0-8837-4f65-8bd5-40d0d8201410-kube-api-access-zqqrm\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303534 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.303677 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc 
kubenswrapper[4949]: I0120 15:07:56.303752 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5bea5c0-8837-4f65-8bd5-40d0d8201410-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.326348 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.334667 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ljbgm"] Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.344733 4949 scope.go:117] "RemoveContainer" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.376462 4949 scope.go:117] "RemoveContainer" containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" Jan 20 15:07:56 crc kubenswrapper[4949]: E0120 15:07:56.377015 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4\": container with ID starting with 28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4 not found: ID does not exist" containerID="28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.377048 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4"} err="failed to get container status \"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4\": rpc error: code = NotFound desc = could not find container \"28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4\": container with ID starting with 28c86fe69399ecba925e251c001a854f6a77c92e7cb6575d3720b957e41efad4 not found: ID does not exist" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.377072 4949 scope.go:117] "RemoveContainer" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" Jan 20 15:07:56 crc kubenswrapper[4949]: E0120 15:07:56.380672 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3\": container with ID starting with 75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3 not found: ID does not exist" containerID="75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.380730 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3"} err="failed to get container status \"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3\": rpc error: code = NotFound desc = could not find container \"75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3\": container with ID starting with 75a04b3ce94af55227b811c6317dce5c42a39bbeb9f67dce86d10ef112dd95e3 not found: ID does not exist" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.798864 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" path="/var/lib/kubelet/pods/57c4987c-6ff4-4108-b5f9-6609525cf7ce/volumes" Jan 20 15:07:56 crc kubenswrapper[4949]: I0120 15:07:56.799653 4949 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" path="/var/lib/kubelet/pods/f5bea5c0-8837-4f65-8bd5-40d0d8201410/volumes" Jan 20 15:07:57 crc kubenswrapper[4949]: I0120 15:07:57.302718 4949 generic.go:334] "Generic (PLEG): container finished" podID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" exitCode=0 Jan 20 15:07:57 crc kubenswrapper[4949]: I0120 15:07:57.302869 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerDied","Data":"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603"} Jan 20 15:07:58 crc kubenswrapper[4949]: I0120 15:07:58.212890 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:58 crc kubenswrapper[4949]: I0120 15:07:58.236947 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-754d6d4c8d-v7txj" Jan 20 15:07:59 crc kubenswrapper[4949]: I0120 15:07:59.160134 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7b69c674cf-wdfrq" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.094048 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208216 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208324 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208340 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208424 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208571 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208600 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.208677 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") pod \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\" (UID: \"0350a5c4-7eb7-42bb-a72e-28b120f08f7a\") " Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.209141 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.214849 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz" (OuterVolumeSpecName: "kube-api-access-79gnz") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "kube-api-access-79gnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.232668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts" (OuterVolumeSpecName: "scripts") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.234792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244196 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244710 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244727 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244742 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244750 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244763 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244772 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244782 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-api" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244789 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-api" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244806 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="init" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244814 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="init" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244829 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244837 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244861 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244870 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.244885 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.244893 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245082 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-api" Jan 20 15:08:01 crc kubenswrapper[4949]: 
I0120 15:08:01.245106 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="98759ef1-a1b3-414c-8131-cbdb90833a60" containerName="neutron-httpd" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245116 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api-log" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245133 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="cinder-scheduler" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245141 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bea5c0-8837-4f65-8bd5-40d0d8201410" containerName="dnsmasq-dns" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245153 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c4987c-6ff4-4108-b5f9-6609525cf7ce" containerName="barbican-api" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245165 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerName="probe" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.245875 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.249051 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.249470 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d2p9q" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.249645 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.286010 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.296576 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312579 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312702 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312744 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312776 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-722jr\" (UniqueName: \"kubernetes.io/projected/0b4f97ab-7425-4271-bd09-0e89073ebdc1-kube-api-access-722jr\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312826 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312837 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312847 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.312855 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79gnz\" (UniqueName: \"kubernetes.io/projected/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-kube-api-access-79gnz\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.339949 4949 generic.go:334] "Generic (PLEG): container finished" podID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" exitCode=0 Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.339997 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerDied","Data":"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461"} Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.340030 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"0350a5c4-7eb7-42bb-a72e-28b120f08f7a","Type":"ContainerDied","Data":"a21906d0080002b80059b53f2425557bdbad30ceb5fad9442aae6757d6585801"} Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.340051 4949 scope.go:117] "RemoveContainer" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.340187 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.347255 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data" (OuterVolumeSpecName: "config-data") pod "0350a5c4-7eb7-42bb-a72e-28b120f08f7a" (UID: "0350a5c4-7eb7-42bb-a72e-28b120f08f7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.361659 4949 scope.go:117] "RemoveContainer" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.377874 4949 scope.go:117] "RemoveContainer" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.378212 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603\": container with ID starting with 17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603 not found: ID does not exist" containerID="17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.378243 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603"} err="failed to get container status \"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603\": rpc error: code = NotFound desc = could not find container \"17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603\": container with ID starting with 17af1739fb5c11c595c768f90595b295e0f9661f7421dec3090ccb2d9bfd9603 not found: ID does not exist" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.378265 4949 scope.go:117] "RemoveContainer" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" Jan 20 15:08:01 crc kubenswrapper[4949]: E0120 15:08:01.378611 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461\": container with ID starting with ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461 not found: ID does not exist" containerID="ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.378636 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461"} err="failed to get container status \"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461\": rpc error: code = NotFound desc = could not find container \"ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461\": container with ID starting with 
ec37766221e7469a66f238576b86edb684cbe1abaab99db931683cbe10818461 not found: ID does not exist" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.413990 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722jr\" (UniqueName: \"kubernetes.io/projected/0b4f97ab-7425-4271-bd09-0e89073ebdc1-kube-api-access-722jr\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414064 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414264 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.414321 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0350a5c4-7eb7-42bb-a72e-28b120f08f7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.415213 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.419150 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.419598 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4f97ab-7425-4271-bd09-0e89073ebdc1-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.433950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722jr\" (UniqueName: \"kubernetes.io/projected/0b4f97ab-7425-4271-bd09-0e89073ebdc1-kube-api-access-722jr\") pod \"openstackclient\" (UID: \"0b4f97ab-7425-4271-bd09-0e89073ebdc1\") " pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.639355 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.699234 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.711548 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.730595 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.732306 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.736605 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.761353 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821386 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821583 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821672 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmcn\" (UniqueName: \"kubernetes.io/projected/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-kube-api-access-7vmcn\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.821698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923786 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmcn\" (UniqueName: 
\"kubernetes.io/projected/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-kube-api-access-7vmcn\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923841 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923910 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.923957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.924015 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.924030 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.924122 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.929332 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.932569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.933034 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.935320 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:01 crc kubenswrapper[4949]: I0120 15:08:01.939324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmcn\" (UniqueName: \"kubernetes.io/projected/ef233e09-2d4d-4f12-9adf-e1bab1dcd101-kube-api-access-7vmcn\") pod \"cinder-scheduler-0\" (UID: \"ef233e09-2d4d-4f12-9adf-e1bab1dcd101\") " pod="openstack/cinder-scheduler-0" Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.085441 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.132955 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.371984 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0b4f97ab-7425-4271-bd09-0e89073ebdc1","Type":"ContainerStarted","Data":"c7a72ff9f80b65b7d772678f9f48eba0e7b3c0592c04657cd483d698352d1657"} Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.675413 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 20 15:08:02 crc kubenswrapper[4949]: I0120 15:08:02.799430 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0350a5c4-7eb7-42bb-a72e-28b120f08f7a" path="/var/lib/kubelet/pods/0350a5c4-7eb7-42bb-a72e-28b120f08f7a/volumes" Jan 20 15:08:03 crc kubenswrapper[4949]: I0120 15:08:03.383220 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef233e09-2d4d-4f12-9adf-e1bab1dcd101","Type":"ContainerStarted","Data":"fee8ebe3b90a305ec63a24873f113d7350787b11e89a550377d36d1586d948fa"} Jan 20 15:08:03 crc kubenswrapper[4949]: I0120 15:08:03.383734 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef233e09-2d4d-4f12-9adf-e1bab1dcd101","Type":"ContainerStarted","Data":"861cf8d0f131b25fa20cc6f6dfeaa6a3808c767ca3d020fcd2dcb3d5149934a0"} Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.074184 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.402956 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef233e09-2d4d-4f12-9adf-e1bab1dcd101","Type":"ContainerStarted","Data":"a3cccdfcec5171fe6a3030f71ccf8473a8b09577cf68f0985139d036ee099f7b"} Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.814332 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.814314139 podStartE2EDuration="3.814314139s" podCreationTimestamp="2026-01-20 15:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:04.42250658 +0000 UTC m=+1080.232337468" watchObservedRunningTime="2026-01-20 15:08:04.814314139 +0000 UTC m=+1080.624145017" Jan 20 15:08:04 crc kubenswrapper[4949]: I0120 15:08:04.831661 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Jan 20 15:08:07 crc kubenswrapper[4949]: I0120 15:08:07.085937 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 20 15:08:11 crc kubenswrapper[4949]: I0120 15:08:11.471149 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0b4f97ab-7425-4271-bd09-0e89073ebdc1","Type":"ContainerStarted","Data":"50529ebaac9b1823a7ed7a1416655dbdb97246909831f4e3f913f412d2233b2e"} Jan 20 15:08:11 crc kubenswrapper[4949]: I0120 15:08:11.490357 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.7498311960000001 podStartE2EDuration="10.490339313s" podCreationTimestamp="2026-01-20 15:08:01 +0000 UTC" firstStartedPulling="2026-01-20 15:08:02.154737621 +0000 UTC m=+1077.964568479" lastFinishedPulling="2026-01-20 15:08:10.895245738 +0000 UTC m=+1086.705076596" observedRunningTime="2026-01-20 15:08:11.485722305 +0000 UTC m=+1087.295553173" watchObservedRunningTime="2026-01-20 15:08:11.490339313 +0000 UTC m=+1087.300170171" Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.194035 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.312286 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.692662 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693088 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core" containerID="cri-o://38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" gracePeriod=30 Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693218 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd" containerID="cri-o://8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" gracePeriod=30 Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693291 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent" containerID="cri-o://55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" gracePeriod=30 Jan 20 15:08:12 crc kubenswrapper[4949]: I0120 15:08:12.693007 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent" containerID="cri-o://bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" gracePeriod=30 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.470061 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498827 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" exitCode=0 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498871 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" exitCode=2 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498880 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" exitCode=0 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498888 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" exitCode=0 Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.498914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499134 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499158 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe5dc0c3-1563-4605-81e6-2ed8a343353b","Type":"ContainerDied","Data":"a62b9189f9b2d6a8459ab648624acabd8486ab8c6921df65e9fe55b79833bb74"} Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499187 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.499346 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.530798 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.566868 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.586047 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.611509 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.612049 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612080 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612105 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.612494 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612515 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612552 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.612921 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 
55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612952 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.612971 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.613270 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613289 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613303 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613582 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.613627 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614030 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 
15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614047 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614416 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614456 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614792 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.614845 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615193 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615212 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615460 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615477 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615801 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status 
\"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.615823 4949 scope.go:117] "RemoveContainer" containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616205 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616228 4949 scope.go:117] "RemoveContainer" containerID="8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616572 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25"} err="failed to get container status \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": rpc error: code = NotFound desc = could not find container \"8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25\": container with ID starting with 8f795340da2a5212a470f4f0e922f679bf3e6dae0fb1e4733b13d0773b4dfa25 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616597 4949 scope.go:117] "RemoveContainer" containerID="38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616919 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece"} err="failed to get container status \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": rpc error: code = NotFound desc = could not find container \"38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece\": container with ID starting with 38521654a43eb1a82c18ca127a5e9908722db264d82eb37cbfdabd2235611ece not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.616963 4949 scope.go:117] "RemoveContainer" containerID="55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.617256 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793"} err="failed to get container status \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": rpc error: code = NotFound desc = could not find container \"55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793\": container with ID starting with 55eb98bee320db0722c4c978ca8dc783d70381a70345c862d3db07c6db7fc793 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.617306 4949 scope.go:117] "RemoveContainer" 
containerID="bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.617700 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391"} err="failed to get container status \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": rpc error: code = NotFound desc = could not find container \"bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391\": container with ID starting with bb8a695c638e9a8751aaa5c8bf487b272220a7d36c14ad71221b552e03f2e391 not found: ID does not exist" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.640949 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641039 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641077 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641106 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641150 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641189 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.641252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") pod \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\" (UID: \"fe5dc0c3-1563-4605-81e6-2ed8a343353b\") " Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.642179 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.642259 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.647485 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts" (OuterVolumeSpecName: "scripts") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.647573 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd" (OuterVolumeSpecName: "kube-api-access-smchd") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "kube-api-access-smchd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.672725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.739633 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data" (OuterVolumeSpecName: "config-data") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743429 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743459 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smchd\" (UniqueName: \"kubernetes.io/projected/fe5dc0c3-1563-4605-81e6-2ed8a343353b-kube-api-access-smchd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743472 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743481 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743488 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe5dc0c3-1563-4605-81e6-2ed8a343353b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.743496 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.763277 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe5dc0c3-1563-4605-81e6-2ed8a343353b" (UID: "fe5dc0c3-1563-4605-81e6-2ed8a343353b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.845221 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe5dc0c3-1563-4605-81e6-2ed8a343353b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.863313 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.870775 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892126 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892581 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892600 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892621 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892629 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892648 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892658 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent" Jan 20 15:08:13 crc kubenswrapper[4949]: E0120 15:08:13.892669 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892677 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892875 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-notification-agent" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892898 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="sg-core" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892911 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="ceilometer-central-agent" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.892925 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" containerName="proxy-httpd" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.894849 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.897164 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.898341 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:08:13 crc kubenswrapper[4949]: I0120 15:08:13.912832 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.052594 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.052868 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.052982 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053173 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053281 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.053363 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.154963 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: 
I0120 15:08:14.155032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155062 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155087 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155150 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155211 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155269 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.155946 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.156459 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.159978 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.160094 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.161845 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.177739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.184039 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ceilometer-0\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.248300 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.696682 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.802136 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5dc0c3-1563-4605-81e6-2ed8a343353b" path="/var/lib/kubelet/pods/fe5dc0c3-1563-4605-81e6-2ed8a343353b/volumes" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.831354 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68cb9b7c44-mz9j4" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Jan 20 15:08:14 crc kubenswrapper[4949]: I0120 15:08:14.831458 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:08:15 crc kubenswrapper[4949]: I0120 15:08:15.541862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"001934a2249d5b368738c4a7af5d9dcb8380201f7480fa3d74bbff0f9ef72bdd"} Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.843369 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.845336 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.866961 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.969559 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.970840 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.973895 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.991834 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:08:16 crc kubenswrapper[4949]: I0120 15:08:16.992828 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.001037 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.009503 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.009607 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.038446 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.098227 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.099435 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.111505 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.112403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.112560 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.112600 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.113050 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.113149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.113184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.115840 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.144328 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod \"nova-api-db-create-8sgnq\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.162455 4949 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.163771 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.165660 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.192116 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214632 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214768 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214865 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.214976 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.215922 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:17 crc 
kubenswrapper[4949]: I0120 15:08:17.217814 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.229370 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod \"nova-cell0-db-create-xh75b\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.239138 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"nova-api-9780-account-create-update-7t5m4\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.257628 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.316583 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.316698 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.316754 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.316785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.317502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.326781 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.347037 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"nova-cell1-db-create-p4ss7\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.365275 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.367759 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.376002 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.377059 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.380672 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.408958 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420138 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420921 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.420950 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.421045 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.421633 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.439484 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"nova-cell0-2b86-account-create-update-htsxk\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.508635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.522748 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.522877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.524250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.557953 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"nova-cell1-8ce0-account-create-update-zqqvh\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.578335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686"} Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.697926 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:17 crc kubenswrapper[4949]: I0120 15:08:17.996223 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.119896 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.276535 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.318266 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.345989 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.406646 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.588961 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerStarted","Data":"938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.589009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerStarted","Data":"a8d6149801e53f89ca794ad797be9f1d0a8ff3695b514f9e3586708af1cd01cd"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.592796 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" event={"ID":"3187f0f3-7689-4faf-92cc-8d869ef8ecd9","Type":"ContainerStarted","Data":"3497bf33a1bcabee5fa530614095f63c3dcb57443d8a56ada4e3a7106a39c3f7"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.595164 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerStarted","Data":"a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.595215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerStarted","Data":"90e33aef6f809221cbaccfc6477d221f54e7cc54a22cd175bdd0f4330a197491"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.600958 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4ss7" event={"ID":"91c4f23f-5c92-4f03-a457-6fe5ddc27eec","Type":"ContainerStarted","Data":"44b49987fac35537104bf2f291fdb77295a57b63139f228ac18ebe5eddbb8915"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.603421 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerStarted","Data":"ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.603466 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" 
event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerStarted","Data":"a33d27a89803603063a0c458e3327c6608fd5212c1eccd7b120b767979ad42f3"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.606128 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-xh75b" podStartSLOduration=2.606110376 podStartE2EDuration="2.606110376s" podCreationTimestamp="2026-01-20 15:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:18.60434799 +0000 UTC m=+1094.414178868" watchObservedRunningTime="2026-01-20 15:08:18.606110376 +0000 UTC m=+1094.415941234" Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.616722 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.622877 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-8sgnq" podStartSLOduration=2.622856241 podStartE2EDuration="2.622856241s" podCreationTimestamp="2026-01-20 15:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:18.619212785 +0000 UTC m=+1094.429043643" watchObservedRunningTime="2026-01-20 15:08:18.622856241 +0000 UTC m=+1094.432687099" Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.625002 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" event={"ID":"956eb935-630a-49f6-8b3e-e5053edea66b","Type":"ContainerStarted","Data":"b4c15550174f2091a960142ecaa27eda30a96f73c6c135490e1950a61b6d1a4f"} Jan 20 15:08:18 crc kubenswrapper[4949]: I0120 15:08:18.641263 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9780-account-create-update-7t5m4" podStartSLOduration=2.64124564 podStartE2EDuration="2.64124564s" podCreationTimestamp="2026-01-20 15:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:18.635057672 +0000 UTC m=+1094.444888530" watchObservedRunningTime="2026-01-20 15:08:18.64124564 +0000 UTC m=+1094.451076498" Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.637539 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.639128 4949 generic.go:334] "Generic (PLEG): container finished" podID="956eb935-630a-49f6-8b3e-e5053edea66b" containerID="1234260b184752a89b6e70a1ae59d09a4b3f7d03f7fb974dc5afeaccba79232f" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.639186 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" event={"ID":"956eb935-630a-49f6-8b3e-e5053edea66b","Type":"ContainerDied","Data":"1234260b184752a89b6e70a1ae59d09a4b3f7d03f7fb974dc5afeaccba79232f"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.641011 4949 generic.go:334] "Generic (PLEG): container finished" podID="6572b1b9-85e4-4ede-879f-754c173433d1" 
containerID="938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.641044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerDied","Data":"938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.642601 4949 generic.go:334] "Generic (PLEG): container finished" podID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerID="135008a156949889d1049508e72bc07f9183b62985200f63db1952335429a011" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.642651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" event={"ID":"3187f0f3-7689-4faf-92cc-8d869ef8ecd9","Type":"ContainerDied","Data":"135008a156949889d1049508e72bc07f9183b62985200f63db1952335429a011"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.644302 4949 generic.go:334] "Generic (PLEG): container finished" podID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerID="a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.644358 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerDied","Data":"a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.646146 4949 generic.go:334] "Generic (PLEG): container finished" podID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerID="24dbf49c8beca72a4d37ee3920737a645e4fe60fe68139ee7aef223996ccfdb6" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.646230 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4ss7" event={"ID":"91c4f23f-5c92-4f03-a457-6fe5ddc27eec","Type":"ContainerDied","Data":"24dbf49c8beca72a4d37ee3920737a645e4fe60fe68139ee7aef223996ccfdb6"} Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.648236 4949 generic.go:334] "Generic (PLEG): container finished" podID="170f8463-ece8-42b9-944f-b4adcc22e897" containerID="ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d" exitCode=0 Jan 20 15:08:19 crc kubenswrapper[4949]: I0120 15:08:19.648280 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerDied","Data":"ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d"} Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 15:08:20.622839 4949 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-conmon-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-conmon-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 15:08:20.623226 4949 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9468e8c_1c76_4f4f_a3da_1cbc82ea418c.slice/crio-a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 15:08:20.623251 4949 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-conmon-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-conmon-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: W0120 15:08:20.623290 4949 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f8463_ece8_42b9_944f_b4adcc22e897.slice/crio-ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d.scope: no such file or directory Jan 20 15:08:20 crc kubenswrapper[4949]: I0120 15:08:20.658742 4949 generic.go:334] "Generic (PLEG): container finished" podID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerID="89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc" exitCode=137 Jan 20 15:08:20 crc kubenswrapper[4949]: I0120 15:08:20.658969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerDied","Data":"89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc"} Jan 20 15:08:20 crc kubenswrapper[4949]: E0120 15:08:20.836776 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod706427a3_6d1f_4a5e_9b50_d84499daec46.slice/crio-89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc.scope\": RecentStats: unable to find data in memory cache]" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.066363 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105082 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105127 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105172 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105191 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105233 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.105306 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") pod \"706427a3-6d1f-4a5e-9b50-d84499daec46\" (UID: \"706427a3-6d1f-4a5e-9b50-d84499daec46\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.123248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs" (OuterVolumeSpecName: "logs") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.123553 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm" (OuterVolumeSpecName: "kube-api-access-58hhm") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "kube-api-access-58hhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.169774 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.217112 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58hhm\" (UniqueName: \"kubernetes.io/projected/706427a3-6d1f-4a5e-9b50-d84499daec46-kube-api-access-58hhm\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.217135 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/706427a3-6d1f-4a5e-9b50-d84499daec46-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.217144 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.250047 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts" (OuterVolumeSpecName: "scripts") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.277317 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.305615 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data" (OuterVolumeSpecName: "config-data") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.336009 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.336033 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.336044 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/706427a3-6d1f-4a5e-9b50-d84499daec46-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.363151 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "706427a3-6d1f-4a5e-9b50-d84499daec46" (UID: "706427a3-6d1f-4a5e-9b50-d84499daec46"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.393710 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.404047 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.418934 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.429553 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.441306 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/706427a3-6d1f-4a5e-9b50-d84499daec46-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.443549 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.466638 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542835 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") pod \"956eb935-630a-49f6-8b3e-e5053edea66b\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542888 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") pod \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542935 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") pod \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\" (UID: \"91c4f23f-5c92-4f03-a457-6fe5ddc27eec\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.542968 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") pod \"956eb935-630a-49f6-8b3e-e5053edea66b\" (UID: \"956eb935-630a-49f6-8b3e-e5053edea66b\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") pod \"6572b1b9-85e4-4ede-879f-754c173433d1\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543058 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") pod \"6572b1b9-85e4-4ede-879f-754c173433d1\" (UID: \"6572b1b9-85e4-4ede-879f-754c173433d1\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543090 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") pod \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543137 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") pod \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\" (UID: \"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.543720 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "956eb935-630a-49f6-8b3e-e5053edea66b" (UID: "956eb935-630a-49f6-8b3e-e5053edea66b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.544144 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6572b1b9-85e4-4ede-879f-754c173433d1" (UID: "6572b1b9-85e4-4ede-879f-754c173433d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.545062 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" (UID: "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.546761 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc" (OuterVolumeSpecName: "kube-api-access-qp9nc") pod "956eb935-630a-49f6-8b3e-e5053edea66b" (UID: "956eb935-630a-49f6-8b3e-e5053edea66b"). InnerVolumeSpecName "kube-api-access-qp9nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.547200 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz" (OuterVolumeSpecName: "kube-api-access-hcxcz") pod "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" (UID: "c9468e8c-1c76-4f4f-a3da-1cbc82ea418c"). InnerVolumeSpecName "kube-api-access-hcxcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.547306 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91c4f23f-5c92-4f03-a457-6fe5ddc27eec" (UID: "91c4f23f-5c92-4f03-a457-6fe5ddc27eec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.547494 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl" (OuterVolumeSpecName: "kube-api-access-vbckl") pod "6572b1b9-85e4-4ede-879f-754c173433d1" (UID: "6572b1b9-85e4-4ede-879f-754c173433d1"). InnerVolumeSpecName "kube-api-access-vbckl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.548771 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz" (OuterVolumeSpecName: "kube-api-access-zpksz") pod "91c4f23f-5c92-4f03-a457-6fe5ddc27eec" (UID: "91c4f23f-5c92-4f03-a457-6fe5ddc27eec"). InnerVolumeSpecName "kube-api-access-zpksz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") pod \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645244 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") pod \"170f8463-ece8-42b9-944f-b4adcc22e897\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645295 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") pod \"170f8463-ece8-42b9-944f-b4adcc22e897\" (UID: \"170f8463-ece8-42b9-944f-b4adcc22e897\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") pod \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\" (UID: \"3187f0f3-7689-4faf-92cc-8d869ef8ecd9\") " Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645876 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "170f8463-ece8-42b9-944f-b4adcc22e897" (UID: "170f8463-ece8-42b9-944f-b4adcc22e897"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.645902 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3187f0f3-7689-4faf-92cc-8d869ef8ecd9" (UID: "3187f0f3-7689-4faf-92cc-8d869ef8ecd9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646293 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646324 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp9nc\" (UniqueName: \"kubernetes.io/projected/956eb935-630a-49f6-8b3e-e5053edea66b-kube-api-access-qp9nc\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646342 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpksz\" (UniqueName: \"kubernetes.io/projected/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-kube-api-access-zpksz\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646559 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c4f23f-5c92-4f03-a457-6fe5ddc27eec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646573 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956eb935-630a-49f6-8b3e-e5053edea66b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646585 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6572b1b9-85e4-4ede-879f-754c173433d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646600 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbckl\" (UniqueName: \"kubernetes.io/projected/6572b1b9-85e4-4ede-879f-754c173433d1-kube-api-access-vbckl\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646612 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170f8463-ece8-42b9-944f-b4adcc22e897-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646623 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.646635 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcxcz\" (UniqueName: \"kubernetes.io/projected/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c-kube-api-access-hcxcz\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.648903 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8" (OuterVolumeSpecName: "kube-api-access-ww9g8") pod "3187f0f3-7689-4faf-92cc-8d869ef8ecd9" (UID: "3187f0f3-7689-4faf-92cc-8d869ef8ecd9"). InnerVolumeSpecName "kube-api-access-ww9g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.658041 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj" (OuterVolumeSpecName: "kube-api-access-5g4dj") pod "170f8463-ece8-42b9-944f-b4adcc22e897" (UID: "170f8463-ece8-42b9-944f-b4adcc22e897"). InnerVolumeSpecName "kube-api-access-5g4dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.671161 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p4ss7" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.671160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p4ss7" event={"ID":"91c4f23f-5c92-4f03-a457-6fe5ddc27eec","Type":"ContainerDied","Data":"44b49987fac35537104bf2f291fdb77295a57b63139f228ac18ebe5eddbb8915"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.671510 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b49987fac35537104bf2f291fdb77295a57b63139f228ac18ebe5eddbb8915" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.672913 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8sgnq" event={"ID":"170f8463-ece8-42b9-944f-b4adcc22e897","Type":"ContainerDied","Data":"a33d27a89803603063a0c458e3327c6608fd5212c1eccd7b120b767979ad42f3"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.672991 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a33d27a89803603063a0c458e3327c6608fd5212c1eccd7b120b767979ad42f3" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.673085 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8sgnq" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693170 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent" containerID="cri-o://ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686" gracePeriod=30 Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693282 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="proxy-httpd" containerID="cri-o://74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7" gracePeriod=30 Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693335 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core" containerID="cri-o://094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c" gracePeriod=30 Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693371 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent" containerID="cri-o://96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c" gracePeriod=30 Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerStarted","Data":"74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.693715 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.698013 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" event={"ID":"956eb935-630a-49f6-8b3e-e5053edea66b","Type":"ContainerDied","Data":"b4c15550174f2091a960142ecaa27eda30a96f73c6c135490e1950a61b6d1a4f"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.698057 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4c15550174f2091a960142ecaa27eda30a96f73c6c135490e1950a61b6d1a4f" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.698135 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2b86-account-create-update-htsxk" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.701764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68cb9b7c44-mz9j4" event={"ID":"706427a3-6d1f-4a5e-9b50-d84499daec46","Type":"ContainerDied","Data":"f125f112410915dcc64d07b1cc57eaefc28f49584f23fb6d7c746e16fc54237b"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.701849 4949 scope.go:117] "RemoveContainer" containerID="03671a5dcb7b909f8e17b23b750473e2d5019a0974d351ec384437e042ce6d74" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.701986 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68cb9b7c44-mz9j4" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.711048 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xh75b" event={"ID":"6572b1b9-85e4-4ede-879f-754c173433d1","Type":"ContainerDied","Data":"a8d6149801e53f89ca794ad797be9f1d0a8ff3695b514f9e3586708af1cd01cd"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.711085 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8d6149801e53f89ca794ad797be9f1d0a8ff3695b514f9e3586708af1cd01cd" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.711152 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xh75b" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.718072 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.774025257 podStartE2EDuration="8.71805498s" podCreationTimestamp="2026-01-20 15:08:13 +0000 UTC" firstStartedPulling="2026-01-20 15:08:14.727119765 +0000 UTC m=+1090.536950623" lastFinishedPulling="2026-01-20 15:08:20.671149488 +0000 UTC m=+1096.480980346" observedRunningTime="2026-01-20 15:08:21.716714687 +0000 UTC m=+1097.526545555" watchObservedRunningTime="2026-01-20 15:08:21.71805498 +0000 UTC m=+1097.527885838" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.721130 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" event={"ID":"3187f0f3-7689-4faf-92cc-8d869ef8ecd9","Type":"ContainerDied","Data":"3497bf33a1bcabee5fa530614095f63c3dcb57443d8a56ada4e3a7106a39c3f7"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.721175 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3497bf33a1bcabee5fa530614095f63c3dcb57443d8a56ada4e3a7106a39c3f7" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.721258 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ce0-account-create-update-zqqvh" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.725000 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9780-account-create-update-7t5m4" event={"ID":"c9468e8c-1c76-4f4f-a3da-1cbc82ea418c","Type":"ContainerDied","Data":"90e33aef6f809221cbaccfc6477d221f54e7cc54a22cd175bdd0f4330a197491"} Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.725040 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e33aef6f809221cbaccfc6477d221f54e7cc54a22cd175bdd0f4330a197491" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.725112 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9780-account-create-update-7t5m4" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.748153 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g4dj\" (UniqueName: \"kubernetes.io/projected/170f8463-ece8-42b9-944f-b4adcc22e897-kube-api-access-5g4dj\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.748184 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww9g8\" (UniqueName: \"kubernetes.io/projected/3187f0f3-7689-4faf-92cc-8d869ef8ecd9-kube-api-access-ww9g8\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.834126 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"] Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.844725 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68cb9b7c44-mz9j4"] Jan 20 15:08:21 crc kubenswrapper[4949]: I0120 15:08:21.939708 4949 scope.go:117] "RemoveContainer" containerID="89466d9dec6f8cb248fd94100bc681481b754e216659cf8ad9662f1f3a00cabc" Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733463 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7" exitCode=0 Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733798 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c" exitCode=2 Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7"} Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733841 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c"} Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c"} Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.733810 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c" exitCode=0 Jan 20 15:08:22 crc kubenswrapper[4949]: I0120 15:08:22.797233 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" path="/var/lib/kubelet/pods/706427a3-6d1f-4a5e-9b50-d84499daec46/volumes" Jan 20 15:08:24 crc kubenswrapper[4949]: I0120 15:08:24.756696 4949 generic.go:334] "Generic (PLEG): container finished" podID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerID="ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686" exitCode=0 Jan 20 15:08:24 crc kubenswrapper[4949]: I0120 15:08:24.756776 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686"} Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.122383 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.298797 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.298910 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.298958 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.299010 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.299185 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.300196 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.300225 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") pod \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\" (UID: \"ea823a04-f7e4-48d6-a4b3-19ad8779178d\") " Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.300820 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.301008 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.308127 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts" (OuterVolumeSpecName: "scripts") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.321756 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5" (OuterVolumeSpecName: "kube-api-access-hfmm5") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "kube-api-access-hfmm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.336700 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403227 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfmm5\" (UniqueName: \"kubernetes.io/projected/ea823a04-f7e4-48d6-a4b3-19ad8779178d-kube-api-access-hfmm5\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403295 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403314 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403334 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.403352 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea823a04-f7e4-48d6-a4b3-19ad8779178d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.459817 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.472376 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data" (OuterVolumeSpecName: "config-data") pod "ea823a04-f7e4-48d6-a4b3-19ad8779178d" (UID: "ea823a04-f7e4-48d6-a4b3-19ad8779178d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.504641 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.504678 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea823a04-f7e4-48d6-a4b3-19ad8779178d-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.777459 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea823a04-f7e4-48d6-a4b3-19ad8779178d","Type":"ContainerDied","Data":"001934a2249d5b368738c4a7af5d9dcb8380201f7480fa3d74bbff0f9ef72bdd"} Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.777875 4949 scope.go:117] "RemoveContainer" containerID="74308d27bdab06616794fc1c29023c529089a6c2ca62b810b0115c68ecb555a7" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.777570 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.806033 4949 scope.go:117] "RemoveContainer" containerID="094f197b9fe482ddb78fed78943b5f892b4e4b417957f128e80dddd9686e3b3c" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.816699 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.831632 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.838030 4949 scope.go:117] "RemoveContainer" containerID="96815dec8008581336abf696eb2dc364503e5c3790470e733d8ccd311641b80c" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850324 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850742 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850766 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850785 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850795 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850808 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850817 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850830 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850838 4949 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850856 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="proxy-httpd" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850865 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="proxy-httpd" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850876 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850884 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850897 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850905 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850931 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850940 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850977 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.850986 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.850997 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851006 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.851019 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851027 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: E0120 15:08:25.851038 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851045 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851231 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" 
containerName="proxy-httpd" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851251 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851266 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-notification-agent" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851278 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851291 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851331 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="706427a3-6d1f-4a5e-9b50-d84499daec46" containerName="horizon-log" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851345 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851356 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" containerName="mariadb-account-create-update" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851367 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851378 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="sg-core" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851388 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" containerName="ceilometer-central-agent" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.851399 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" containerName="mariadb-database-create" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.854372 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.862349 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.862735 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.868549 4949 scope.go:117] "RemoveContainer" containerID="ba93dbaa6f6cc8b4dfb99b5113f1fcec66e1a85717e56783d276a15269131686" Jan 20 15:08:25 crc kubenswrapper[4949]: I0120 15:08:25.868834 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012537 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012630 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012658 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012763 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012895 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.012983 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114098 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114132 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114212 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114250 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114298 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114313 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.114732 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.116316 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.120278 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.120451 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.122677 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.128485 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.142420 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"ceilometer-0\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.177250 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.632573 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.786452 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"361a157554a6c6e8ca30573fa06fb2376290235760dc3bb233c617496cbf7fb2"} Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.798440 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea823a04-f7e4-48d6-a4b3-19ad8779178d" path="/var/lib/kubelet/pods/ea823a04-f7e4-48d6-a4b3-19ad8779178d/volumes" Jan 20 15:08:26 crc kubenswrapper[4949]: I0120 15:08:26.930423 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.152025 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.152406 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.473000 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.474293 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.475966 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hc7rn" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.476233 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.482058 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.486384 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.550827 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.550889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.550970 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.551002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652494 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652637 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: 
\"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.652750 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.656597 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.657127 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.665163 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.669042 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"nova-cell0-conductor-db-sync-845d4\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.790237 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:27 crc kubenswrapper[4949]: I0120 15:08:27.805935 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93"} Jan 20 15:08:28 crc kubenswrapper[4949]: I0120 15:08:28.265719 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:08:28 crc kubenswrapper[4949]: W0120 15:08:28.265742 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d68b174_da83_41e7_804c_68a858beedf7.slice/crio-1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b WatchSource:0}: Error finding container 1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b: Status 404 returned error can't find the container with id 1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b Jan 20 15:08:28 crc kubenswrapper[4949]: I0120 15:08:28.816499 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9"} Jan 20 15:08:28 crc kubenswrapper[4949]: I0120 15:08:28.817139 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d"} Jan 20 15:08:28 crc kubenswrapper[4949]: I0120 15:08:28.818050 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerStarted","Data":"1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b"} Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.835937 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerStarted","Data":"67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a"} Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836881 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836224 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" containerID="cri-o://613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836092 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" containerID="cri-o://9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.836269 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="proxy-httpd" containerID="cri-o://67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 
15:08:30.836285 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" containerID="cri-o://44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d" gracePeriod=30 Jan 20 15:08:30 crc kubenswrapper[4949]: I0120 15:08:30.862502 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.415913428 podStartE2EDuration="5.862482373s" podCreationTimestamp="2026-01-20 15:08:25 +0000 UTC" firstStartedPulling="2026-01-20 15:08:26.640781761 +0000 UTC m=+1102.450612619" lastFinishedPulling="2026-01-20 15:08:30.087350706 +0000 UTC m=+1105.897181564" observedRunningTime="2026-01-20 15:08:30.86050121 +0000 UTC m=+1106.670332058" watchObservedRunningTime="2026-01-20 15:08:30.862482373 +0000 UTC m=+1106.672313231" Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845264 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a" exitCode=0 Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845638 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9" exitCode=2 Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845647 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d" exitCode=0 Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845604 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a"} Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845674 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9"} Jan 20 15:08:31 crc kubenswrapper[4949]: I0120 15:08:31.845684 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d"} Jan 20 15:08:32 crc kubenswrapper[4949]: I0120 15:08:32.856112 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerID="9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93" exitCode=0 Jan 20 15:08:32 crc kubenswrapper[4949]: I0120 15:08:32.856185 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93"} Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.623823 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637337 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637429 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637481 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637734 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637887 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.637926 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") pod \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\" (UID: \"8b066491-fdc4-49f7-8c15-9a9dd53d4e48\") " Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638300 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638724 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.638820 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.644874 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts" (OuterVolumeSpecName: "scripts") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.651704 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945" (OuterVolumeSpecName: "kube-api-access-mn945") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "kube-api-access-mn945". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.678382 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.739920 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn945\" (UniqueName: \"kubernetes.io/projected/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-kube-api-access-mn945\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.740185 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.740194 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.740202 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.766342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.778403 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data" (OuterVolumeSpecName: "config-data") pod "8b066491-fdc4-49f7-8c15-9a9dd53d4e48" (UID: "8b066491-fdc4-49f7-8c15-9a9dd53d4e48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.841675 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.841972 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b066491-fdc4-49f7-8c15-9a9dd53d4e48-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.897020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerStarted","Data":"3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c"} Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.900475 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b066491-fdc4-49f7-8c15-9a9dd53d4e48","Type":"ContainerDied","Data":"361a157554a6c6e8ca30573fa06fb2376290235760dc3bb233c617496cbf7fb2"} Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.900539 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.900558 4949 scope.go:117] "RemoveContainer" containerID="67c0c8288753d865525941e82c6f7d6898fbff037946fb4aa78784a21a123d7a" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.923798 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-845d4" podStartSLOduration=1.746305046 podStartE2EDuration="10.923778626s" podCreationTimestamp="2026-01-20 15:08:27 +0000 UTC" firstStartedPulling="2026-01-20 15:08:28.267638142 +0000 UTC m=+1104.077469000" lastFinishedPulling="2026-01-20 15:08:37.445111672 +0000 UTC m=+1113.254942580" observedRunningTime="2026-01-20 15:08:37.912559948 +0000 UTC m=+1113.722390836" watchObservedRunningTime="2026-01-20 15:08:37.923778626 +0000 UTC m=+1113.733609494" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.924599 4949 scope.go:117] "RemoveContainer" containerID="613f6cef019dd7a719a13730afac0cf4233681e8c3fe7855ec8197c1855dcfe9" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.941379 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.952673 4949 scope.go:117] "RemoveContainer" containerID="44679265d9dcf3cb819c19e24e3b29970b6f7e0dabba04fdedb33f677d94c45d" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.957634 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.963979 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.964509 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.964613 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.964720 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" 
containerName="proxy-httpd" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.964802 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="proxy-httpd" Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.964890 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.964966 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: E0120 15:08:37.965053 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.965140 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.965701 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="sg-core" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.965829 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-notification-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.965929 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="proxy-httpd" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.966013 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" containerName="ceilometer-central-agent" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.970007 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.972568 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.974266 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.978695 4949 scope.go:117] "RemoveContainer" containerID="9f40776095674ae0723e55828aea96a10c12d01eb79a32e3b48c60c20602ee93" Jan 20 15:08:37 crc kubenswrapper[4949]: I0120 15:08:37.978711 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.044771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.044824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045068 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045111 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045286 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045394 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.045510 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146153 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146390 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146604 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146684 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.146922 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.147028 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.147203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.147374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.147623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.150454 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.150467 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") 
pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.150557 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.151809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.170049 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"ceilometer-0\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.292821 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.802844 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b066491-fdc4-49f7-8c15-9a9dd53d4e48" path="/var/lib/kubelet/pods/8b066491-fdc4-49f7-8c15-9a9dd53d4e48/volumes" Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.806299 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:08:38 crc kubenswrapper[4949]: W0120 15:08:38.808407 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaded75a0_687f_4b2c_a437_d170b095dfa1.slice/crio-4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf WatchSource:0}: Error finding container 4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf: Status 404 returned error can't find the container with id 4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf Jan 20 15:08:38 crc kubenswrapper[4949]: I0120 15:08:38.917152 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf"} Jan 20 15:08:40 crc kubenswrapper[4949]: I0120 15:08:40.956676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2"} Jan 20 15:08:41 crc kubenswrapper[4949]: I0120 15:08:41.965932 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd"} Jan 20 15:08:41 crc kubenswrapper[4949]: I0120 15:08:41.966471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6"} Jan 20 15:08:43 crc kubenswrapper[4949]: I0120 15:08:43.989261 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerStarted","Data":"c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc"} Jan 20 15:08:43 crc kubenswrapper[4949]: I0120 15:08:43.989887 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:08:44 crc kubenswrapper[4949]: I0120 15:08:44.012864 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.06238062 podStartE2EDuration="7.012842975s" podCreationTimestamp="2026-01-20 15:08:37 +0000 UTC" firstStartedPulling="2026-01-20 15:08:38.811269304 +0000 UTC m=+1114.621100162" lastFinishedPulling="2026-01-20 15:08:42.761731659 +0000 UTC m=+1118.571562517" observedRunningTime="2026-01-20 15:08:44.006712149 +0000 UTC m=+1119.816543007" watchObservedRunningTime="2026-01-20 15:08:44.012842975 +0000 UTC m=+1119.822673843" Jan 20 15:08:48 crc kubenswrapper[4949]: I0120 15:08:48.023753 4949 generic.go:334] "Generic (PLEG): container finished" podID="5d68b174-da83-41e7-804c-68a858beedf7" containerID="3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c" exitCode=0 Jan 20 15:08:48 crc kubenswrapper[4949]: I0120 15:08:48.023840 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerDied","Data":"3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c"} Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.429453 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.558929 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.559007 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.559066 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.559123 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") pod \"5d68b174-da83-41e7-804c-68a858beedf7\" (UID: \"5d68b174-da83-41e7-804c-68a858beedf7\") " Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.564286 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl" (OuterVolumeSpecName: "kube-api-access-xrrnl") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). 
InnerVolumeSpecName "kube-api-access-xrrnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.566084 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts" (OuterVolumeSpecName: "scripts") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.586811 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data" (OuterVolumeSpecName: "config-data") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.609705 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d68b174-da83-41e7-804c-68a858beedf7" (UID: "5d68b174-da83-41e7-804c-68a858beedf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661293 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrrnl\" (UniqueName: \"kubernetes.io/projected/5d68b174-da83-41e7-804c-68a858beedf7-kube-api-access-xrrnl\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661331 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661341 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:49 crc kubenswrapper[4949]: I0120 15:08:49.661351 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d68b174-da83-41e7-804c-68a858beedf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.048146 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-845d4" event={"ID":"5d68b174-da83-41e7-804c-68a858beedf7","Type":"ContainerDied","Data":"1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b"} Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.048188 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e713f8cfffd5b3905f4ebe1481d3991b1461be03690bdd66e4e59cfbdd1b97b" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.048247 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-845d4" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.193274 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 15:08:50 crc kubenswrapper[4949]: E0120 15:08:50.193820 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d68b174-da83-41e7-804c-68a858beedf7" containerName="nova-cell0-conductor-db-sync" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.193846 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d68b174-da83-41e7-804c-68a858beedf7" containerName="nova-cell0-conductor-db-sync" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.194088 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d68b174-da83-41e7-804c-68a858beedf7" containerName="nova-cell0-conductor-db-sync" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.194877 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.198170 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.198506 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hc7rn" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.200477 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.272775 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.272819 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.272890 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkzp\" (UniqueName: \"kubernetes.io/projected/432760ec-2ef6-4335-a7ba-21a2d73ede73-kube-api-access-bkkzp\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.374701 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.374906 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkkzp\" (UniqueName: \"kubernetes.io/projected/432760ec-2ef6-4335-a7ba-21a2d73ede73-kube-api-access-bkkzp\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc 
kubenswrapper[4949]: I0120 15:08:50.375078 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.381198 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.392192 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432760ec-2ef6-4335-a7ba-21a2d73ede73-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.397616 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkkzp\" (UniqueName: \"kubernetes.io/projected/432760ec-2ef6-4335-a7ba-21a2d73ede73-kube-api-access-bkkzp\") pod \"nova-cell0-conductor-0\" (UID: \"432760ec-2ef6-4335-a7ba-21a2d73ede73\") " pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:50 crc kubenswrapper[4949]: I0120 15:08:50.532287 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:51 crc kubenswrapper[4949]: I0120 15:08:50.998300 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 20 15:08:51 crc kubenswrapper[4949]: I0120 15:08:51.060263 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"432760ec-2ef6-4335-a7ba-21a2d73ede73","Type":"ContainerStarted","Data":"fb0cd78d6cdc88bb0f51b416b6d026b8d1a38e3560c300297e80481570b83afe"} Jan 20 15:08:52 crc kubenswrapper[4949]: I0120 15:08:52.069057 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"432760ec-2ef6-4335-a7ba-21a2d73ede73","Type":"ContainerStarted","Data":"659590ecd0960c2e3a29a1ea62dcc192e0e8d683680a4448d1a2f46b2fa0104c"} Jan 20 15:08:52 crc kubenswrapper[4949]: I0120 15:08:52.069449 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 20 15:08:52 crc kubenswrapper[4949]: I0120 15:08:52.096646 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.096630708 podStartE2EDuration="2.096630708s" podCreationTimestamp="2026-01-20 15:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:08:52.089443639 +0000 UTC m=+1127.899274497" watchObservedRunningTime="2026-01-20 15:08:52.096630708 +0000 UTC m=+1127.906461566" Jan 20 15:08:57 crc kubenswrapper[4949]: I0120 15:08:57.152184 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:08:57 crc kubenswrapper[4949]: I0120 
15:08:57.152477 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:09:00 crc kubenswrapper[4949]: I0120 15:09:00.578295 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.143273 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.144402 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.167545 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.182545 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.192264 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.278280 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.280809 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.286154 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.294106 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348587 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348624 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348677 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.348715 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: 
\"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.378237 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.385219 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.392847 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.393949 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.434335 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.435780 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.439888 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450736 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450788 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450813 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450881 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 
15:09:01.450895 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.450915 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.457860 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.463090 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.468295 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.472852 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.519115 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"nova-cell0-cell-mapping-4pxxs\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.531573 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.532672 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.538860 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.551422 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560123 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560168 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560269 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560286 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560311 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560341 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc 
kubenswrapper[4949]: I0120 15:09:01.560379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560396 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560413 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560428 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560450 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.560468 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.561205 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.570044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.572674 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.580506 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.582744 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.600875 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"nova-api-0\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.613227 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.617116 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661704 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661760 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661810 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661828 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661845 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661863 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5n6t\" 
(UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661879 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661897 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661933 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661971 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.661984 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.662018 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.662037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.663429 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.666794 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.669809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.672349 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.672621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.674136 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.680572 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"nova-scheduler-0\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.681850 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"nova-cell1-novncproxy-0\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.686432 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.688309 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"nova-metadata-0\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.706964 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763154 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763388 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763433 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763460 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.763504 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.764352 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.765082 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.771202 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.779941 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.787831 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"dnsmasq-dns-8b8cf6657-vr8t6\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.805797 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.940801 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.963925 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:01 crc kubenswrapper[4949]: I0120 15:09:01.984821 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.136386 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.372539 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.374009 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2edecf7f_fbdf_4907_ba28_33f70a58f37a.slice/crio-dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6 WatchSource:0}: Error finding container dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6: Status 404 returned error can't find the container with id dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6 Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.516106 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5364ff4f_3ee5_4577_b82c_0c094bd55125.slice/crio-bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8 WatchSource:0}: Error finding container bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8: Status 404 returned error can't find the container with id bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8 Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.519027 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.527395 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.529545 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.536408 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.536569 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.546946 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"] Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.622108 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.625103 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2db0feee_11b2_4926_a0c9_2b3f39743fa3.slice/crio-5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764 WatchSource:0}: Error finding container 5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764: Status 404 returned error can't find the container with id 5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764 Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.696693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.696871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.697078 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.697261 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.699273 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.714553 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd3dc9fa_0768_4d5d_bbe8_812388ebabf7.slice/crio-ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b WatchSource:0}: Error finding container ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b: Status 404 
returned error can't find the container with id ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.798915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.798991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.799011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.799085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.804482 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.804689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.804687 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.806726 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:02 crc kubenswrapper[4949]: W0120 15:09:02.809626 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae2f6e22_4c5a_4d30_95a8_0cacc9f21791.slice/crio-23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c WatchSource:0}: Error finding container 23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c: Status 404 returned error can't find the container with id 
23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.824364 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"nova-cell1-conductor-db-sync-n8g8k\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:02 crc kubenswrapper[4949]: I0120 15:09:02.921813 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.214951 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerStarted","Data":"dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.216238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerStarted","Data":"cf270cf5a5820c777ad79aaecd34efdf73ed36872e34df88df821f17776a6fb7"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.217102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerStarted","Data":"ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.217999 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerStarted","Data":"5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.219562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerStarted","Data":"ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.219593 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerStarted","Data":"bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.222958 4949 generic.go:334] "Generic (PLEG): container finished" podID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerID="b9e7253362065575b97f2ce8215072002f755dd1b51aa51ada8298fea676a78f" exitCode=0 Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.223044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerDied","Data":"b9e7253362065575b97f2ce8215072002f755dd1b51aa51ada8298fea676a78f"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.223067 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerStarted","Data":"23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c"} Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.253787 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-4pxxs" podStartSLOduration=2.253769482 podStartE2EDuration="2.253769482s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:03.241855051 +0000 UTC m=+1139.051685909" watchObservedRunningTime="2026-01-20 15:09:03.253769482 +0000 UTC m=+1139.063600340" Jan 20 15:09:03 crc kubenswrapper[4949]: I0120 15:09:03.378643 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"] Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.234128 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerStarted","Data":"044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7"} Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.234550 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.237650 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerStarted","Data":"8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a"} Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.237680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerStarted","Data":"c8b5ce4d167e29814242380f36417059a04a0bfe76bb946c5d3c88c545749a63"} Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.261731 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" podStartSLOduration=3.261655216 podStartE2EDuration="3.261655216s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:04.253843756 +0000 UTC m=+1140.063674614" watchObservedRunningTime="2026-01-20 15:09:04.261655216 +0000 UTC m=+1140.071486084" Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.278061 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" podStartSLOduration=2.278037539 podStartE2EDuration="2.278037539s" podCreationTimestamp="2026-01-20 15:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:04.269410633 +0000 UTC m=+1140.079241491" watchObservedRunningTime="2026-01-20 15:09:04.278037539 +0000 UTC m=+1140.087868397" Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.662713 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:04 crc kubenswrapper[4949]: I0120 15:09:04.674008 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.260743 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerStarted","Data":"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e"} Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 
15:09:06.262399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerStarted","Data":"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac"} Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.262800 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" gracePeriod=30 Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.270692 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerStarted","Data":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"} Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.273777 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerStarted","Data":"d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32"} Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.290018 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.223171814 podStartE2EDuration="5.290000406s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.717848057 +0000 UTC m=+1138.527678915" lastFinishedPulling="2026-01-20 15:09:05.784676659 +0000 UTC m=+1141.594507507" observedRunningTime="2026-01-20 15:09:06.275496842 +0000 UTC m=+1142.085327700" watchObservedRunningTime="2026-01-20 15:09:06.290000406 +0000 UTC m=+1142.099831254" Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.297281 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.140347948 podStartE2EDuration="5.297241037s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.626944073 +0000 UTC m=+1138.436774931" lastFinishedPulling="2026-01-20 15:09:05.783837162 +0000 UTC m=+1141.593668020" observedRunningTime="2026-01-20 15:09:06.293225898 +0000 UTC m=+1142.103056756" watchObservedRunningTime="2026-01-20 15:09:06.297241037 +0000 UTC m=+1142.107071905" Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.942277 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 15:09:06 crc kubenswrapper[4949]: I0120 15:09:06.964972 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.283829 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerStarted","Data":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"} Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.283902 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log" containerID="cri-o://7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" gracePeriod=30 Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.283942 4949 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata" containerID="cri-o://cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" gracePeriod=30 Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.287670 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerStarted","Data":"92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5"} Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.312603 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9090683 podStartE2EDuration="6.312583559s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.379073804 +0000 UTC m=+1138.188904662" lastFinishedPulling="2026-01-20 15:09:05.782589073 +0000 UTC m=+1141.592419921" observedRunningTime="2026-01-20 15:09:07.311976739 +0000 UTC m=+1143.121807617" watchObservedRunningTime="2026-01-20 15:09:07.312583559 +0000 UTC m=+1143.122414417" Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.344057 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.753123017 podStartE2EDuration="6.344038224s" podCreationTimestamp="2026-01-20 15:09:01 +0000 UTC" firstStartedPulling="2026-01-20 15:09:02.196543091 +0000 UTC m=+1138.006373939" lastFinishedPulling="2026-01-20 15:09:05.787458288 +0000 UTC m=+1141.597289146" observedRunningTime="2026-01-20 15:09:07.330734339 +0000 UTC m=+1143.140565227" watchObservedRunningTime="2026-01-20 15:09:07.344038224 +0000 UTC m=+1143.153869092" Jan 20 15:09:07 crc kubenswrapper[4949]: I0120 15:09:07.885419 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000354 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000711 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000874 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.000983 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") pod \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\" (UID: \"2edecf7f-fbdf-4907-ba28-33f70a58f37a\") " Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.003022 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs" (OuterVolumeSpecName: "logs") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.007353 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t" (OuterVolumeSpecName: "kube-api-access-k5n6t") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "kube-api-access-k5n6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.033016 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data" (OuterVolumeSpecName: "config-data") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.035733 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2edecf7f-fbdf-4907-ba28-33f70a58f37a" (UID: "2edecf7f-fbdf-4907-ba28-33f70a58f37a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102550 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102786 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5n6t\" (UniqueName: \"kubernetes.io/projected/2edecf7f-fbdf-4907-ba28-33f70a58f37a-kube-api-access-k5n6t\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102799 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edecf7f-fbdf-4907-ba28-33f70a58f37a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.102807 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2edecf7f-fbdf-4907-ba28-33f70a58f37a-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.300034 4949 generic.go:334] "Generic (PLEG): container finished" podID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" exitCode=0 Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.300064 4949 generic.go:334] "Generic (PLEG): container finished" podID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" exitCode=143 Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.300897 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305665 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerDied","Data":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"} Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerDied","Data":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"} Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305767 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2edecf7f-fbdf-4907-ba28-33f70a58f37a","Type":"ContainerDied","Data":"dfc0c9c10b3ce817ef098e4f68702c155bacde72daede980fb661d41e2e94fc6"} Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.305787 4949 scope.go:117] "RemoveContainer" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.313715 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.366687 4949 scope.go:117] "RemoveContainer" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.415901 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.435323 4949 scope.go:117] "RemoveContainer" 
containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.435821 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": container with ID starting with cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086 not found: ID does not exist" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.435859 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"} err="failed to get container status \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": rpc error: code = NotFound desc = could not find container \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": container with ID starting with cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086 not found: ID does not exist" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.435884 4949 scope.go:117] "RemoveContainer" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.436072 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": container with ID starting with 7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f not found: ID does not exist" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.436095 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"} err="failed to get container status \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": rpc error: code = NotFound desc = could not find container \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": container with ID starting with 7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f not found: ID does not exist" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.436110 4949 scope.go:117] "RemoveContainer" containerID="cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.441589 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.442957 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086"} err="failed to get container status \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": rpc error: code = NotFound desc = could not find container \"cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086\": container with ID starting with cdbd06bf02cfe0b2ba383f499ae2ed765041117976a8063f3ea092e62ac10086 not found: ID does not exist" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.443008 4949 scope.go:117] "RemoveContainer" containerID="7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.447894 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f"} err="failed to get container status \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": rpc error: code = NotFound desc = could not find container \"7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f\": container with ID starting with 7b2bb095aa30034e9bd95cf3b8cbe669c1667036bdd38639292f0203352f328f not found: ID does not exist" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.451555 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.452003 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452020 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log" Jan 20 15:09:08 crc kubenswrapper[4949]: E0120 15:09:08.452035 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452043 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452238 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-metadata" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.452260 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" containerName="nova-metadata-log" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.453412 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.467908 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.468207 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.485492 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.515802 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.515892 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.515994 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.516052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.516081 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.617272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.617966 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.617996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"nova-metadata-0\" (UID: 
\"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.618108 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.618168 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.618563 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.624311 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.627403 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.632168 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.657186 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"nova-metadata-0\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " pod="openstack/nova-metadata-0" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.798421 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2edecf7f-fbdf-4907-ba28-33f70a58f37a" path="/var/lib/kubelet/pods/2edecf7f-fbdf-4907-ba28-33f70a58f37a/volumes" Jan 20 15:09:08 crc kubenswrapper[4949]: I0120 15:09:08.802221 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:09 crc kubenswrapper[4949]: W0120 15:09:09.271688 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode612b892_650e_4f7e_b7f9_70abcd671b83.slice/crio-1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251 WatchSource:0}: Error finding container 1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251: Status 404 returned error can't find the container with id 1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251 Jan 20 15:09:09 crc kubenswrapper[4949]: I0120 15:09:09.276204 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:09 crc kubenswrapper[4949]: I0120 15:09:09.311245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerStarted","Data":"1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251"} Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.321026 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerStarted","Data":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"} Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.321615 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerStarted","Data":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"} Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.322767 4949 generic.go:334] "Generic (PLEG): container finished" podID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerID="ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec" exitCode=0 Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.322816 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerDied","Data":"ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec"} Jan 20 15:09:10 crc kubenswrapper[4949]: I0120 15:09:10.339788 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.339765464 podStartE2EDuration="2.339765464s" podCreationTimestamp="2026-01-20 15:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:10.338211265 +0000 UTC m=+1146.148042123" watchObservedRunningTime="2026-01-20 15:09:10.339765464 +0000 UTC m=+1146.149596322" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.331157 4949 generic.go:334] "Generic (PLEG): container finished" podID="883cbf80-263a-4fc7-b962-147019f05553" containerID="8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a" exitCode=0 Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.331260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerDied","Data":"8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a"} Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.614765 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:09:11 crc 
kubenswrapper[4949]: I0120 15:09:11.615215 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.665348 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783568 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783677 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783865 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.783981 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") pod \"5364ff4f-3ee5-4577-b82c-0c094bd55125\" (UID: \"5364ff4f-3ee5-4577-b82c-0c094bd55125\") " Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.791434 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8" (OuterVolumeSpecName: "kube-api-access-bxjj8") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "kube-api-access-bxjj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.795126 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts" (OuterVolumeSpecName: "scripts") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.814098 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data" (OuterVolumeSpecName: "config-data") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.816151 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5364ff4f-3ee5-4577-b82c-0c094bd55125" (UID: "5364ff4f-3ee5-4577-b82c-0c094bd55125"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887177 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887207 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887218 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjj8\" (UniqueName: \"kubernetes.io/projected/5364ff4f-3ee5-4577-b82c-0c094bd55125-kube-api-access-bxjj8\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.887226 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5364ff4f-3ee5-4577-b82c-0c094bd55125-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.942178 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.987072 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:11 crc kubenswrapper[4949]: I0120 15:09:11.993444 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.076360 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.076643 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" containerID="cri-o://4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" gracePeriod=10 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.087296 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.087804 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" containerID="cri-o://62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.350814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pxxs" event={"ID":"5364ff4f-3ee5-4577-b82c-0c094bd55125","Type":"ContainerDied","Data":"bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8"} Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.351709 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2bccade4d04358a6a86705369f4d9ba1b92bfa4216a4996e55775face467a8" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.350834 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pxxs" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.353147 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerID="62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe" exitCode=2 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.354077 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerDied","Data":"62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe"} Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.450156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.572617 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.572880 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" containerID="cri-o://d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.573342 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" containerID="cri-o://92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.595204 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": EOF" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.595350 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": EOF" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.606541 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.606745 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" containerID="cri-o://d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.607117 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" containerID="cri-o://872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" gracePeriod=30 Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.945102 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:12 crc kubenswrapper[4949]: I0120 15:09:12.954913 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010553 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010597 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010638 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010688 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010726 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010761 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") pod \"883cbf80-263a-4fc7-b962-147019f05553\" (UID: \"883cbf80-263a-4fc7-b962-147019f05553\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010939 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.010968 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") pod \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\" (UID: \"f15e5c23-e5ed-49da-a675-b79a84acb3a5\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.026657 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx" (OuterVolumeSpecName: "kube-api-access-42zvx") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: 
"883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "kube-api-access-42zvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.037283 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl" (OuterVolumeSpecName: "kube-api-access-6t9zl") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "kube-api-access-6t9zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.038869 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts" (OuterVolumeSpecName: "scripts") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: "883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.059739 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: "883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.063340 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data" (OuterVolumeSpecName: "config-data") pod "883cbf80-263a-4fc7-b962-147019f05553" (UID: "883cbf80-263a-4fc7-b962-147019f05553"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.079309 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.092099 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.103821 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113653 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113713 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113725 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113748 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113764 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42zvx\" (UniqueName: \"kubernetes.io/projected/883cbf80-263a-4fc7-b962-147019f05553-kube-api-access-42zvx\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113781 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883cbf80-263a-4fc7-b962-147019f05553-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113794 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t9zl\" (UniqueName: \"kubernetes.io/projected/f15e5c23-e5ed-49da-a675-b79a84acb3a5-kube-api-access-6t9zl\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.113806 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.139448 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.140102 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config" (OuterVolumeSpecName: "config") pod "f15e5c23-e5ed-49da-a675-b79a84acb3a5" (UID: "f15e5c23-e5ed-49da-a675-b79a84acb3a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.216633 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15e5c23-e5ed-49da-a675-b79a84acb3a5-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.271250 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.313249 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.318455 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") pod \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\" (UID: \"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.324918 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt" (OuterVolumeSpecName: "kube-api-access-sncqt") pod "8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" (UID: "8b1e1042-2ebf-4d51-972d-8ebd6d8b4290"). InnerVolumeSpecName "kube-api-access-sncqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.371453 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8b1e1042-2ebf-4d51-972d-8ebd6d8b4290","Type":"ContainerDied","Data":"b6f194539b862d0ee8b6be35de75344541fb71d8b75e2a6809fe23930f272acc"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.371547 4949 scope.go:117] "RemoveContainer" containerID="62bdcf3b5bc8e4b64554a871ec18ef217094f715388b07151922297b140130fe" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.371665 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.380737 4949 generic.go:334] "Generic (PLEG): container finished" podID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" exitCode=0 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.380828 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerDied","Data":"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.380856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" event={"ID":"f15e5c23-e5ed-49da-a675-b79a84acb3a5","Type":"ContainerDied","Data":"052fba35240bac70130e0cfdaa3376b77a051a12b8b97d00e1f00afe30ca5b57"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.381193 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-8g4zv" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.391575 4949 generic.go:334] "Generic (PLEG): container finished" podID="bac9a094-8b7c-494a-9436-405785ad8097" containerID="d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32" exitCode=143 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.391649 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerDied","Data":"d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.398240 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" event={"ID":"883cbf80-263a-4fc7-b962-147019f05553","Type":"ContainerDied","Data":"c8b5ce4d167e29814242380f36417059a04a0bfe76bb946c5d3c88c545749a63"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.398289 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b5ce4d167e29814242380f36417059a04a0bfe76bb946c5d3c88c545749a63" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.398362 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n8g8k" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.415502 4949 scope.go:117] "RemoveContainer" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.434802 4949 generic.go:334] "Generic (PLEG): container finished" podID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" exitCode=0 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.434850 4949 generic.go:334] "Generic (PLEG): container finished" podID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" exitCode=143 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437424 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437719 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerDied","Data":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437749 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerDied","Data":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.437760 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e612b892-650e-4f7e-b7f9-70abcd671b83","Type":"ContainerDied","Data":"1e2bd1473eef6a3916d158d95bebf435136c05fcbff7e29b68928418f1f06251"} Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438630 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438700 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438784 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438840 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.438906 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") pod \"e612b892-650e-4f7e-b7f9-70abcd671b83\" (UID: \"e612b892-650e-4f7e-b7f9-70abcd671b83\") " Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.443256 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sncqt\" (UniqueName: \"kubernetes.io/projected/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290-kube-api-access-sncqt\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.445286 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs" (OuterVolumeSpecName: "logs") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.459634 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq" (OuterVolumeSpecName: "kube-api-access-jr2zq") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "kube-api-access-jr2zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.469491 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.482550 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.496747 4949 scope.go:117] "RemoveContainer" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.515979 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.522143 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532451 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532915 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="init" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532942 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="init" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532958 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532967 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532982 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.532990 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.532997 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533005 4949 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.533019 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533027 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.533041 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerName="nova-manage" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533049 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerName="nova-manage" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.533081 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883cbf80-263a-4fc7-b962-147019f05553" containerName="nova-cell1-conductor-db-sync" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533089 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="883cbf80-263a-4fc7-b962-147019f05553" containerName="nova-cell1-conductor-db-sync" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533311 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" containerName="nova-manage" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533327 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-log" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533341 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" containerName="dnsmasq-dns" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533358 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" containerName="nova-metadata-metadata" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533374 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="883cbf80-263a-4fc7-b962-147019f05553" containerName="nova-cell1-conductor-db-sync" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.533389 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" containerName="kube-state-metrics" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.534104 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.536652 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.544215 4949 scope.go:117] "RemoveContainer" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.544305 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.545456 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b\": container with ID starting with 4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b not found: ID does not exist" containerID="4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.545570 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.545568 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b"} err="failed to get container status \"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b\": rpc error: code = NotFound desc = could not find container \"4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b\": container with ID starting with 4c24e3286c8c6eb826a79e5f4627f30ddcd906e6c88c68dc8adf9b7dfba10f5b not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.545603 4949 scope.go:117] "RemoveContainer" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.546052 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b\": container with ID starting with 65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b not found: ID does not exist" containerID="65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.546075 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b"} err="failed to get container status \"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b\": rpc error: code = NotFound desc = could not find container \"65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b\": container with ID starting with 65649bb6d4e4ef73ed4cf3fd8d6ffd690ce40a5598a39cc1645b3855562c440b not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.546117 4949 scope.go:117] "RemoveContainer" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.547072 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr2zq\" (UniqueName: \"kubernetes.io/projected/e612b892-650e-4f7e-b7f9-70abcd671b83-kube-api-access-jr2zq\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 
15:09:13.547086 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.547095 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612b892-650e-4f7e-b7f9-70abcd671b83-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.547104 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.548047 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.548085 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data" (OuterVolumeSpecName: "config-data") pod "e612b892-650e-4f7e-b7f9-70abcd671b83" (UID: "e612b892-650e-4f7e-b7f9-70abcd671b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.548178 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.553173 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.567355 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.578804 4949 scope.go:117] "RemoveContainer" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.578985 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.590571 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-8g4zv"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648565 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8nt2\" (UniqueName: \"kubernetes.io/projected/4a8d0e18-297d-407d-8c7c-64555052b960-kube-api-access-h8nt2\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648614 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648909 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " 
pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.648987 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649034 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649060 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649133 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs459\" (UniqueName: \"kubernetes.io/projected/e19f25ae-0920-4573-9f2e-6447ca83e76c-kube-api-access-cs459\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.649196 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612b892-650e-4f7e-b7f9-70abcd671b83-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.661252 4949 scope.go:117] "RemoveContainer" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.661799 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": container with ID starting with 872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51 not found: ID does not exist" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.661858 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"} err="failed to get container status \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": rpc error: code = NotFound desc = could not find container \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": container with ID starting with 872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51 not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.661892 4949 scope.go:117] "RemoveContainer" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: E0120 15:09:13.662276 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": container with ID starting with d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d not found: ID does not exist" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662307 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"} err="failed to get container status \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": rpc error: code = NotFound desc = could not find container \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": container with ID starting with d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662330 4949 scope.go:117] "RemoveContainer" containerID="872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662595 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51"} err="failed to get container status \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": rpc error: code = NotFound desc = could not find container \"872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51\": container with ID starting with 872afcf6ad0a5b5e615a20517f75130ab5aa5957ee315c769b9a9ba379802d51 not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.662651 4949 scope.go:117] "RemoveContainer" containerID="d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.663092 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d"} err="failed to get container status \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": rpc error: code = NotFound desc = could not find container \"d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d\": container with ID starting with d5229cbd717dc1b76a9bda12a0aafcb3fb542cfd7ffe80174d0608989ff6649d not found: ID does not exist" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751053 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751136 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751168 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " 
pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751242 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs459\" (UniqueName: \"kubernetes.io/projected/e19f25ae-0920-4573-9f2e-6447ca83e76c-kube-api-access-cs459\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751275 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8nt2\" (UniqueName: \"kubernetes.io/projected/4a8d0e18-297d-407d-8c7c-64555052b960-kube-api-access-h8nt2\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751301 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.751372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.764884 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.767506 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.768930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.774795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a8d0e18-297d-407d-8c7c-64555052b960-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.779162 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19f25ae-0920-4573-9f2e-6447ca83e76c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.779389 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8nt2\" (UniqueName: \"kubernetes.io/projected/4a8d0e18-297d-407d-8c7c-64555052b960-kube-api-access-h8nt2\") pod \"kube-state-metrics-0\" (UID: \"4a8d0e18-297d-407d-8c7c-64555052b960\") " pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.786397 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs459\" (UniqueName: \"kubernetes.io/projected/e19f25ae-0920-4573-9f2e-6447ca83e76c-kube-api-access-cs459\") pod \"nova-cell1-conductor-0\" (UID: \"e19f25ae-0920-4573-9f2e-6447ca83e76c\") " pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.824415 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.824902 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-central-agent" containerID="cri-o://c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.825617 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" containerID="cri-o://c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.825684 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" containerID="cri-o://ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.826056 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-notification-agent" containerID="cri-o://aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6" gracePeriod=30 Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.864005 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.876635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.884892 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.911589 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.917728 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.919946 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.923927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.929160 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 15:09:13 crc kubenswrapper[4949]: I0120 15:09:13.946631 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.063840 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.063887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.063998 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.064021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.064048 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168455 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168499 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168560 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"nova-metadata-0\" (UID: 
\"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.168654 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.169126 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.176022 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.176885 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.178984 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.189080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"nova-metadata-0\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.302282 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.408630 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.446870 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a8d0e18-297d-407d-8c7c-64555052b960","Type":"ContainerStarted","Data":"86735712e59b167561983ba5424e8121510c647890a906b60418269a87464210"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.455927 4949 generic.go:334] "Generic (PLEG): container finished" podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc" exitCode=0 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.455976 4949 generic.go:334] "Generic (PLEG): container finished" podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd" exitCode=2 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.455985 4949 generic.go:334] "Generic (PLEG): container finished" podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2" exitCode=0 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.456031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.456055 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.456065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2"} Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.461963 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" containerID="cri-o://8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" gracePeriod=30 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.528632 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 20 15:09:14 crc kubenswrapper[4949]: W0120 15:09:14.551878 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19f25ae_0920_4573_9f2e_6447ca83e76c.slice/crio-ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7 WatchSource:0}: Error finding container ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7: Status 404 returned error can't find the container with id ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7 Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.800572 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1e1042-2ebf-4d51-972d-8ebd6d8b4290" path="/var/lib/kubelet/pods/8b1e1042-2ebf-4d51-972d-8ebd6d8b4290/volumes" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.801762 4949 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e612b892-650e-4f7e-b7f9-70abcd671b83" path="/var/lib/kubelet/pods/e612b892-650e-4f7e-b7f9-70abcd671b83/volumes" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.802658 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f15e5c23-e5ed-49da-a675-b79a84acb3a5" path="/var/lib/kubelet/pods/f15e5c23-e5ed-49da-a675-b79a84acb3a5/volumes" Jan 20 15:09:14 crc kubenswrapper[4949]: I0120 15:09:14.860109 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.472076 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerStarted","Data":"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.474374 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerStarted","Data":"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.474399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerStarted","Data":"b5f7d7790b27ef22d09958a0de70361e54da69dea5bf5665160ed80a276f0768"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.476155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e19f25ae-0920-4573-9f2e-6447ca83e76c","Type":"ContainerStarted","Data":"906adccbd92e2f2c3210b92cbe740b10245ff7c3835196b47b628a62d451b219"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.476183 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e19f25ae-0920-4573-9f2e-6447ca83e76c","Type":"ContainerStarted","Data":"ad5700433709aa9d496c8821a3ef96861d880cd620901905a7c0aed410929ef7"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.476793 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.478791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a8d0e18-297d-407d-8c7c-64555052b960","Type":"ContainerStarted","Data":"2b4e856bc66062e5a4ea5bebd7e327c9999e21fbee84a97312b41e90a73f495a"} Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.479409 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.499350 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.499320872 podStartE2EDuration="2.499320872s" podCreationTimestamp="2026-01-20 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:15.495330044 +0000 UTC m=+1151.305160902" watchObservedRunningTime="2026-01-20 15:09:15.499320872 +0000 UTC m=+1151.309151780" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.527479 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.132991257 podStartE2EDuration="2.527455361s" 
podCreationTimestamp="2026-01-20 15:09:13 +0000 UTC" firstStartedPulling="2026-01-20 15:09:14.420424979 +0000 UTC m=+1150.230255837" lastFinishedPulling="2026-01-20 15:09:14.814889083 +0000 UTC m=+1150.624719941" observedRunningTime="2026-01-20 15:09:15.514618012 +0000 UTC m=+1151.324448870" watchObservedRunningTime="2026-01-20 15:09:15.527455361 +0000 UTC m=+1151.337286239" Jan 20 15:09:15 crc kubenswrapper[4949]: I0120 15:09:15.547768 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.54774993 podStartE2EDuration="2.54774993s" podCreationTimestamp="2026-01-20 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:15.532455701 +0000 UTC m=+1151.342286569" watchObservedRunningTime="2026-01-20 15:09:15.54774993 +0000 UTC m=+1151.357580788" Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.944196 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.946071 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.947283 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 20 15:09:16 crc kubenswrapper[4949]: E0120 15:09:16.947342 4949 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.499471 4949 generic.go:334] "Generic (PLEG): container finished" podID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerID="aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6" exitCode=0 Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.499531 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6"} Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.630471 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755064 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755148 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755223 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755293 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755407 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.755499 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") pod \"aded75a0-687f-4b2c-a437-d170b095dfa1\" (UID: \"aded75a0-687f-4b2c-a437-d170b095dfa1\") " Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.757597 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.757785 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.761965 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts" (OuterVolumeSpecName: "scripts") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.763336 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm" (OuterVolumeSpecName: "kube-api-access-jm8bm") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "kube-api-access-jm8bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.787845 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.838899 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.854340 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data" (OuterVolumeSpecName: "config-data") pod "aded75a0-687f-4b2c-a437-d170b095dfa1" (UID: "aded75a0-687f-4b2c-a437-d170b095dfa1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857317 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857352 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm8bm\" (UniqueName: \"kubernetes.io/projected/aded75a0-687f-4b2c-a437-d170b095dfa1-kube-api-access-jm8bm\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857362 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857372 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857379 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aded75a0-687f-4b2c-a437-d170b095dfa1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857387 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:17 crc kubenswrapper[4949]: I0120 15:09:17.857395 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aded75a0-687f-4b2c-a437-d170b095dfa1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.396591 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.473045 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") pod \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.473506 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") pod \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.473813 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") pod \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\" (UID: \"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7\") " Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.479927 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng" (OuterVolumeSpecName: "kube-api-access-2j2ng") pod "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" (UID: "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7"). 
InnerVolumeSpecName "kube-api-access-2j2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.497594 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" (UID: "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.507442 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data" (OuterVolumeSpecName: "config-data") pod "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" (UID: "fd3dc9fa-0768-4d5d-bbe8-812388ebabf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.515440 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aded75a0-687f-4b2c-a437-d170b095dfa1","Type":"ContainerDied","Data":"4f2aed2e596cd1bd862b2f82034c64d55398a1a3e33e11badb9434caf9476fdf"} Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.515493 4949 scope.go:117] "RemoveContainer" containerID="c8a2687d37477d4fdc0e08fbca281bf4f3609a5b84d3dc3e48c83dc2da9356fc" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.515855 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536101 4949 generic.go:334] "Generic (PLEG): container finished" podID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" exitCode=0 Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536145 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerDied","Data":"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e"} Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536172 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd3dc9fa-0768-4d5d-bbe8-812388ebabf7","Type":"ContainerDied","Data":"ef147b3312681f076212a3015e0ad64a68c1c03d862796fdda0afd0a1fb1356b"} Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.536218 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.570876 4949 scope.go:117] "RemoveContainer" containerID="ff954d25174743b7fc9a5f409f9ec492be2694bd88940020115cc1df2d2182dd" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.576544 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j2ng\" (UniqueName: \"kubernetes.io/projected/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-kube-api-access-2j2ng\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.576583 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.576597 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.583906 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.607615 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.607858 4949 scope.go:117] "RemoveContainer" containerID="aaa3ab52efe26d7038d228d19bf8770524096160e328bdbc8747e2fbc92e0cf6" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.634153 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.640557 4949 scope.go:117] "RemoveContainer" containerID="c15e9f32e338c7525386d2ec8fb1ff4e65d26692dac51a5619d7736eac96a1a2" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.661394 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.677699 4949 scope.go:117] "RemoveContainer" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680463 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680816 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680831 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680841 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-central-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680847 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-central-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680854 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-notification-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680860 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" 
containerName="ceilometer-notification-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680870 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680876 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.680899 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.680905 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681076 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-notification-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681092 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="proxy-httpd" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681106 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" containerName="nova-scheduler-scheduler" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681119 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="sg-core" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681128 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" containerName="ceilometer-central-agent" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.681938 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.686846 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.701412 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.714254 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.715652 4949 scope.go:117] "RemoveContainer" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" Jan 20 15:09:18 crc kubenswrapper[4949]: E0120 15:09:18.716066 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e\": container with ID starting with 8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e not found: ID does not exist" containerID="8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.716111 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e"} err="failed to get container status \"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e\": rpc error: code = NotFound desc = could not find container \"8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e\": container with ID starting with 8c9ea94d401ac90d42df9c9c5127516d4d61a6c97d4ecb9d5ee6e9a9620a599e not found: ID does not exist" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.716567 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.720292 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.720629 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.720922 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.728467 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.779890 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.779948 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780203 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780280 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780321 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780367 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780590 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780813 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.780931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.801047 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aded75a0-687f-4b2c-a437-d170b095dfa1" path="/var/lib/kubelet/pods/aded75a0-687f-4b2c-a437-d170b095dfa1/volumes" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.801941 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd3dc9fa-0768-4d5d-bbe8-812388ebabf7" path="/var/lib/kubelet/pods/fd3dc9fa-0768-4d5d-bbe8-812388ebabf7/volumes" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882197 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882219 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882283 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod 
\"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882397 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882481 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882656 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.882691 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.883221 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.883471 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.886764 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.886781 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"nova-scheduler-0\" (UID: 
\"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.886909 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.888244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.888824 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.888965 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.889578 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.898096 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod \"nova-scheduler-0\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:18 crc kubenswrapper[4949]: I0120 15:09:18.899729 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") pod \"ceilometer-0\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " pod="openstack/ceilometer-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.006129 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.038100 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.303358 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.303912 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 15:09:19 crc kubenswrapper[4949]: W0120 15:09:19.475189 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88614638_70cb_4bcf_a017_bb7dbe17f962.slice/crio-5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e WatchSource:0}: Error finding container 5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e: Status 404 returned error can't find the container with id 5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.475988 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.550426 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerStarted","Data":"5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e"} Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556510 4949 generic.go:334] "Generic (PLEG): container finished" podID="bac9a094-8b7c-494a-9436-405785ad8097" containerID="92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5" exitCode=0 Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556651 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerDied","Data":"92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5"} Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bac9a094-8b7c-494a-9436-405785ad8097","Type":"ContainerDied","Data":"cf270cf5a5820c777ad79aaecd34efdf73ed36872e34df88df821f17776a6fb7"} Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.556691 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf270cf5a5820c777ad79aaecd34efdf73ed36872e34df88df821f17776a6fb7" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.611121 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:19 crc kubenswrapper[4949]: W0120 15:09:19.613607 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb896cb7c_63d5_4b9d_af2c_bfb89b07100c.slice/crio-fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12 WatchSource:0}: Error finding container fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12: Status 404 returned error can't find the container with id fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12 Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.656946 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.817807 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818644 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs" (OuterVolumeSpecName: "logs") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818688 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.818726 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") pod \"bac9a094-8b7c-494a-9436-405785ad8097\" (UID: \"bac9a094-8b7c-494a-9436-405785ad8097\") " Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.819626 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bac9a094-8b7c-494a-9436-405785ad8097-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.825747 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9" (OuterVolumeSpecName: "kube-api-access-nlbs9") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "kube-api-access-nlbs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.848367 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data" (OuterVolumeSpecName: "config-data") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.851433 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bac9a094-8b7c-494a-9436-405785ad8097" (UID: "bac9a094-8b7c-494a-9436-405785ad8097"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.923857 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbs9\" (UniqueName: \"kubernetes.io/projected/bac9a094-8b7c-494a-9436-405785ad8097-kube-api-access-nlbs9\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.923891 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:19 crc kubenswrapper[4949]: I0120 15:09:19.923900 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bac9a094-8b7c-494a-9436-405785ad8097-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.572572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerStarted","Data":"14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e"} Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.579483 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.583653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e"} Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.583725 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12"} Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.596929 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.59689828 podStartE2EDuration="2.59689828s" podCreationTimestamp="2026-01-20 15:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:20.592529671 +0000 UTC m=+1156.402360539" watchObservedRunningTime="2026-01-20 15:09:20.59689828 +0000 UTC m=+1156.406729178" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.618392 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.630836 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.638117 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: E0120 15:09:20.638702 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.638736 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" Jan 20 15:09:20 crc kubenswrapper[4949]: E0120 15:09:20.638768 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" Jan 20 15:09:20 crc 
kubenswrapper[4949]: I0120 15:09:20.638782 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.639100 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-log" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.639144 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac9a094-8b7c-494a-9436-405785ad8097" containerName="nova-api-api" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.640791 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.643235 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.645952 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.735735 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.735810 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.735844 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.736359 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.798612 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac9a094-8b7c-494a-9436-405785ad8097" path="/var/lib/kubelet/pods/bac9a094-8b7c-494a-9436-405785ad8097/volumes" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.837921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.837980 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.838025 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.838452 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.838626 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.842573 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.856488 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.857503 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"nova-api-0\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " pod="openstack/nova-api-0" Jan 20 15:09:20 crc kubenswrapper[4949]: I0120 15:09:20.986424 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:21 crc kubenswrapper[4949]: I0120 15:09:21.566438 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:21 crc kubenswrapper[4949]: W0120 15:09:21.567658 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bd6ec41_c953_4165_a562_7d02937f0974.slice/crio-4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc WatchSource:0}: Error finding container 4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc: Status 404 returned error can't find the container with id 4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc Jan 20 15:09:21 crc kubenswrapper[4949]: I0120 15:09:21.597773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437"} Jan 20 15:09:21 crc kubenswrapper[4949]: I0120 15:09:21.599594 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerStarted","Data":"4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 15:09:22.609192 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 15:09:22.610824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerStarted","Data":"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 15:09:22.610914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerStarted","Data":"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c"} Jan 20 15:09:22 crc kubenswrapper[4949]: I0120 15:09:22.630393 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.630365924 podStartE2EDuration="2.630365924s" podCreationTimestamp="2026-01-20 15:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:22.626891123 +0000 UTC m=+1158.436722001" watchObservedRunningTime="2026-01-20 15:09:22.630365924 +0000 UTC m=+1158.440196792" Jan 20 15:09:23 crc kubenswrapper[4949]: I0120 15:09:23.894156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 20 15:09:23 crc kubenswrapper[4949]: I0120 15:09:23.900389 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.007635 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.302995 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.303043 4949 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.625471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerStarted","Data":"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb"} Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.626475 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:09:24 crc kubenswrapper[4949]: I0120 15:09:24.669564 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.652027822 podStartE2EDuration="6.66954317s" podCreationTimestamp="2026-01-20 15:09:18 +0000 UTC" firstStartedPulling="2026-01-20 15:09:19.61637663 +0000 UTC m=+1155.426207488" lastFinishedPulling="2026-01-20 15:09:23.633891978 +0000 UTC m=+1159.443722836" observedRunningTime="2026-01-20 15:09:24.666843703 +0000 UTC m=+1160.476674561" watchObservedRunningTime="2026-01-20 15:09:24.66954317 +0000 UTC m=+1160.479374038" Jan 20 15:09:25 crc kubenswrapper[4949]: I0120 15:09:25.315697 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:09:25 crc kubenswrapper[4949]: I0120 15:09:25.315720 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.152461 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.152560 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.152618 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.153463 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.153571 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" 
containerName="machine-config-daemon" containerID="cri-o://bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b" gracePeriod=600 Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.671469 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b" exitCode=0 Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.671979 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b"} Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.672005 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e"} Jan 20 15:09:27 crc kubenswrapper[4949]: I0120 15:09:27.672021 4949 scope.go:117] "RemoveContainer" containerID="a9f2254803a3339bd5948184ba1d6e5f7906b8737b4fd39cf0395a4f1a0c84cf" Jan 20 15:09:29 crc kubenswrapper[4949]: I0120 15:09:29.006845 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 15:09:29 crc kubenswrapper[4949]: I0120 15:09:29.052258 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 15:09:29 crc kubenswrapper[4949]: I0120 15:09:29.723375 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 15:09:30 crc kubenswrapper[4949]: I0120 15:09:30.987856 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:09:30 crc kubenswrapper[4949]: I0120 15:09:30.987933 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:09:32 crc kubenswrapper[4949]: I0120 15:09:32.070816 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 15:09:32 crc kubenswrapper[4949]: I0120 15:09:32.070884 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.311117 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.314156 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.320487 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 15:09:34 crc kubenswrapper[4949]: I0120 15:09:34.742909 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.651225 
4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779307 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") pod \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779481 4949 generic.go:334] "Generic (PLEG): container finished" podID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" exitCode=137 Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779563 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779578 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerDied","Data":"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac"} Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779660 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2db0feee-11b2-4926-a0c9-2b3f39743fa3","Type":"ContainerDied","Data":"5cb3a881d57bf4b8f9df00f1f35e15df03a87b71e581bef18db93f66a9512764"} Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.779684 4949 scope.go:117] "RemoveContainer" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.780282 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") pod \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.780387 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") pod \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\" (UID: \"2db0feee-11b2-4926-a0c9-2b3f39743fa3\") " Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.784565 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd" (OuterVolumeSpecName: "kube-api-access-msjgd") pod "2db0feee-11b2-4926-a0c9-2b3f39743fa3" (UID: "2db0feee-11b2-4926-a0c9-2b3f39743fa3"). InnerVolumeSpecName "kube-api-access-msjgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.803227 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data" (OuterVolumeSpecName: "config-data") pod "2db0feee-11b2-4926-a0c9-2b3f39743fa3" (UID: "2db0feee-11b2-4926-a0c9-2b3f39743fa3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.804874 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2db0feee-11b2-4926-a0c9-2b3f39743fa3" (UID: "2db0feee-11b2-4926-a0c9-2b3f39743fa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.876470 4949 scope.go:117] "RemoveContainer" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" Jan 20 15:09:36 crc kubenswrapper[4949]: E0120 15:09:36.877020 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac\": container with ID starting with 4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac not found: ID does not exist" containerID="4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.877067 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac"} err="failed to get container status \"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac\": rpc error: code = NotFound desc = could not find container \"4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac\": container with ID starting with 4278953c2824773b0c314c7651a9c2ca5bc4c0cd5840d22642bec26102e9acac not found: ID does not exist" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.883022 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.883056 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msjgd\" (UniqueName: \"kubernetes.io/projected/2db0feee-11b2-4926-a0c9-2b3f39743fa3-kube-api-access-msjgd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:36 crc kubenswrapper[4949]: I0120 15:09:36.883070 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2db0feee-11b2-4926-a0c9-2b3f39743fa3-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.109040 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.117879 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.140233 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: E0120 15:09:37.140987 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.141007 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.141275 4949 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" containerName="nova-cell1-novncproxy-novncproxy" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.142223 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.145343 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.145624 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.145852 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.149661 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289457 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289620 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.289887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzggd\" (UniqueName: \"kubernetes.io/projected/16e90cac-28e0-4d75-a613-d77c9263f634-kube-api-access-tzggd\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.391783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.391870 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.391907 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.392017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzggd\" (UniqueName: \"kubernetes.io/projected/16e90cac-28e0-4d75-a613-d77c9263f634-kube-api-access-tzggd\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.392101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.397829 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.398275 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.399018 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.401236 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e90cac-28e0-4d75-a613-d77c9263f634-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.412296 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzggd\" (UniqueName: \"kubernetes.io/projected/16e90cac-28e0-4d75-a613-d77c9263f634-kube-api-access-tzggd\") pod \"nova-cell1-novncproxy-0\" (UID: \"16e90cac-28e0-4d75-a613-d77c9263f634\") " pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.464425 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:37 crc kubenswrapper[4949]: I0120 15:09:37.987084 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 20 15:09:37 crc kubenswrapper[4949]: W0120 15:09:37.999813 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16e90cac_28e0_4d75_a613_d77c9263f634.slice/crio-7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b WatchSource:0}: Error finding container 7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b: Status 404 returned error can't find the container with id 7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.800632 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2db0feee-11b2-4926-a0c9-2b3f39743fa3" path="/var/lib/kubelet/pods/2db0feee-11b2-4926-a0c9-2b3f39743fa3/volumes" Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.808969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16e90cac-28e0-4d75-a613-d77c9263f634","Type":"ContainerStarted","Data":"8ec93c74147135c69c526ad1b2d444f4f7f6b480e4f2d0084776ed862dd750e9"} Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.809009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16e90cac-28e0-4d75-a613-d77c9263f634","Type":"ContainerStarted","Data":"7fc45aac4e91f35ac3246faedc4050e2b58e2ec38d9d75dc20ccb31e5ad6df8b"} Jan 20 15:09:38 crc kubenswrapper[4949]: I0120 15:09:38.841627 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.841603586 podStartE2EDuration="1.841603586s" podCreationTimestamp="2026-01-20 15:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:38.838183037 +0000 UTC m=+1174.648013915" watchObservedRunningTime="2026-01-20 15:09:38.841603586 +0000 UTC m=+1174.651434464" Jan 20 15:09:40 crc kubenswrapper[4949]: I0120 15:09:40.993085 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 15:09:40 crc kubenswrapper[4949]: I0120 15:09:40.994188 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 15:09:40 crc kubenswrapper[4949]: I0120 15:09:40.999233 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 15:09:41 crc kubenswrapper[4949]: I0120 15:09:41.015483 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 15:09:41 crc kubenswrapper[4949]: I0120 15:09:41.841477 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 15:09:41 crc kubenswrapper[4949]: I0120 15:09:41.844328 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.035804 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.037221 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.049973 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161145 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161203 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161237 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161256 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.161608 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263467 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263542 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263578 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.263656 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.264731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.264973 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.265025 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.265255 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.283717 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod \"dnsmasq-dns-68d4b6d797-zznrk\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.365431 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.466612 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:42 crc kubenswrapper[4949]: I0120 15:09:42.932733 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:09:43 crc kubenswrapper[4949]: I0120 15:09:43.871281 4949 generic.go:334] "Generic (PLEG): container finished" podID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerID="70b240f3fe0404274eea1d589f15c7d987d02877fd9ababf09b1f0ab34e25351" exitCode=0 Jan 20 15:09:43 crc kubenswrapper[4949]: I0120 15:09:43.871506 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerDied","Data":"70b240f3fe0404274eea1d589f15c7d987d02877fd9ababf09b1f0ab34e25351"} Jan 20 15:09:43 crc kubenswrapper[4949]: I0120 15:09:43.872878 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerStarted","Data":"55f29ec4bac4ac4376fe3452c37cd668b9f8ffe67866fcc276100056a1141b3d"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.278297 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279214 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" containerID="cri-o://28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279383 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" containerID="cri-o://8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279355 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" containerID="cri-o://dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.279407 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-notification-agent" containerID="cri-o://61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.292177 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.578478 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881256 4949 generic.go:334] "Generic (PLEG): container finished" podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" exitCode=0 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881285 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" exitCode=2 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881293 4949 generic.go:334] "Generic (PLEG): container finished" podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" exitCode=0 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881324 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.881356 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.882717 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" containerID="cri-o://036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.883473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerStarted","Data":"3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107"} Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.883499 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.883752 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" containerID="cri-o://108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" gracePeriod=30 Jan 20 15:09:44 crc kubenswrapper[4949]: I0120 15:09:44.907693 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" podStartSLOduration=2.9076739099999998 podStartE2EDuration="2.90767391s" podCreationTimestamp="2026-01-20 15:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:44.905001664 +0000 UTC m=+1180.714832522" watchObservedRunningTime="2026-01-20 15:09:44.90767391 +0000 UTC m=+1180.717504768" Jan 20 15:09:45 crc kubenswrapper[4949]: I0120 15:09:45.893829 4949 generic.go:334] "Generic (PLEG): container finished" podID="8bd6ec41-c953-4165-a562-7d02937f0974" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" exitCode=143 Jan 20 15:09:45 crc kubenswrapper[4949]: I0120 15:09:45.894088 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerDied","Data":"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c"} Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.465265 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.488700 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.556364 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661530 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661564 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661626 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661738 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.661869 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") pod \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\" (UID: \"b896cb7c-63d5-4b9d-af2c-bfb89b07100c\") " Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 
15:09:47.662712 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.662968 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.668383 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d" (OuterVolumeSpecName: "kube-api-access-mst5d") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "kube-api-access-mst5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.685604 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts" (OuterVolumeSpecName: "scripts") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.691907 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.715419 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.742663 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764421 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mst5d\" (UniqueName: \"kubernetes.io/projected/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-kube-api-access-mst5d\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764469 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764482 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764490 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764498 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764506 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.764537 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.766691 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data" (OuterVolumeSpecName: "config-data") pod "b896cb7c-63d5-4b9d-af2c-bfb89b07100c" (UID: "b896cb7c-63d5-4b9d-af2c-bfb89b07100c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.865892 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b896cb7c-63d5-4b9d-af2c-bfb89b07100c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.913910 4949 generic.go:334] "Generic (PLEG): container finished" podID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" exitCode=0 Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.913993 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.914013 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437"} Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.914057 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b896cb7c-63d5-4b9d-af2c-bfb89b07100c","Type":"ContainerDied","Data":"fc9d85b659e1b531803c541361ef453d29be511e9d5098054ef1c2cf3a38ba12"} Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.914078 4949 scope.go:117] "RemoveContainer" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.938106 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.956315 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.961673 4949 scope.go:117] "RemoveContainer" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.976501 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994184 4949 scope.go:117] "RemoveContainer" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994294 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994622 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-notification-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994633 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-notification-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994646 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994652 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994663 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994669 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: E0120 15:09:47.994684 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994689 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994903 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-notification-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994912 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="proxy-httpd" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994923 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="ceilometer-central-agent" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.994935 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" containerName="sg-core" Jan 20 15:09:47 crc kubenswrapper[4949]: I0120 15:09:47.996414 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.005063 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.005319 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.005425 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.025172 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.044054 4949 scope.go:117] "RemoveContainer" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.073338 4949 scope.go:117] "RemoveContainer" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.074704 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb\": container with ID starting with 8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb not found: ID does not exist" containerID="8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.074746 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb"} err="failed to get container status \"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb\": rpc error: code = NotFound desc = could not find container \"8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb\": container with ID starting with 8abba08fe878cccc5e33fcf41cee79c907f4fc12d0e779e40d4a22e04fdceffb not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.074794 4949 scope.go:117] "RemoveContainer" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075664 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075716 
4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075786 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075810 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075829 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075879 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.075903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.076395 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db\": container with ID starting with dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db not found: ID does not exist" containerID="dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076425 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db"} err="failed to get container status \"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db\": rpc error: code = NotFound desc = could not find container \"dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db\": container with ID starting with 
dd815a4823278c2b2bbc2a75618f0bd5b776a7319ed82870e14b35a980d211db not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076446 4949 scope.go:117] "RemoveContainer" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.076734 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437\": container with ID starting with 61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437 not found: ID does not exist" containerID="61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076764 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437"} err="failed to get container status \"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437\": rpc error: code = NotFound desc = could not find container \"61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437\": container with ID starting with 61f0c760867c2d6a0266ad521b53ea8713acbc4d42f873dd89d5678424b73437 not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.076782 4949 scope.go:117] "RemoveContainer" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.076979 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e\": container with ID starting with 28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e not found: ID does not exist" containerID="28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.077004 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e"} err="failed to get container status \"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e\": rpc error: code = NotFound desc = could not find container \"28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e\": container with ID starting with 28db128c9ad8d8e119ff1c053cbb9e3e292cbb63125b65a9edecdd9e04e99a4e not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.146446 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.147780 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.156910 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.157103 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.159890 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.177448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.177894 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178400 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178453 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178539 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178592 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178645 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.178727 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.179251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.179615 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.181717 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.182219 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.183076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.183765 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.185244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.195900 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"ceilometer-0\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.280771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.280979 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.281218 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.281258 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.381674 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.383875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.383946 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.384060 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.384091 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.389592 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.390578 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.393731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") 
" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.411509 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"nova-cell1-cell-mapping-jgctz\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.484292 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.623762 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.688781 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.688852 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.688963 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.689055 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.690341 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs" (OuterVolumeSpecName: "logs") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.696831 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd" (OuterVolumeSpecName: "kube-api-access-b45bd") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "kube-api-access-b45bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.716369 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data podName:8bd6ec41-c953-4165-a562-7d02937f0974 nodeName:}" failed. No retries permitted until 2026-01-20 15:09:49.216343125 +0000 UTC m=+1185.026173983 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974") : error deleting /var/lib/kubelet/pods/8bd6ec41-c953-4165-a562-7d02937f0974/volume-subpaths: remove /var/lib/kubelet/pods/8bd6ec41-c953-4165-a562-7d02937f0974/volume-subpaths: no such file or directory Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.719459 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.791530 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bd6ec41-c953-4165-a562-7d02937f0974-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.791559 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/8bd6ec41-c953-4165-a562-7d02937f0974-kube-api-access-b45bd\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.791572 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.801345 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b896cb7c-63d5-4b9d-af2c-bfb89b07100c" path="/var/lib/kubelet/pods/b896cb7c-63d5-4b9d-af2c-bfb89b07100c/volumes" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.881742 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.930435 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"8f0dd94a9e63de42a5122bf4ccb941587cc9b12585cbfa4f431123811ef49ec3"} Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934580 4949 generic.go:334] "Generic (PLEG): container finished" podID="8bd6ec41-c953-4165-a562-7d02937f0974" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" exitCode=0 Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934636 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934695 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerDied","Data":"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08"} Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934728 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bd6ec41-c953-4165-a562-7d02937f0974","Type":"ContainerDied","Data":"4c85dccdb57dfe74c71b4cabaa20f1daeb9555e921083768364b44b2426c43bc"} Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.934748 4949 scope.go:117] "RemoveContainer" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.958041 4949 scope.go:117] "RemoveContainer" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.980435 4949 scope.go:117] "RemoveContainer" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.980841 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08\": container with ID starting with 108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08 not found: ID does not exist" containerID="108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.980876 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08"} err="failed to get container status \"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08\": rpc error: code = NotFound desc = could not find container \"108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08\": container with ID starting with 108682e1d345cad772bf2a0653b040415af8de1e3b34cefd0d2e8ba8d7efaa08 not found: ID does not exist" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.980902 4949 scope.go:117] "RemoveContainer" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" Jan 20 15:09:48 crc kubenswrapper[4949]: E0120 15:09:48.981289 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c\": container with ID starting with 036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c not found: ID does not exist" containerID="036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c" Jan 20 15:09:48 crc kubenswrapper[4949]: I0120 15:09:48.981330 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c"} err="failed to get container status \"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c\": rpc error: code = NotFound desc = could not find container \"036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c\": container with ID starting with 036834dd58a34526e0883aa99eb10c74bd1dba8287d5b1037637d1322b46fc1c not found: ID does not exist" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.049631 4949 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:09:49 crc kubenswrapper[4949]: W0120 15:09:49.053743 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462eb38e_1d62_43e2_92c4_1074a1c054b9.slice/crio-3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5 WatchSource:0}: Error finding container 3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5: Status 404 returned error can't find the container with id 3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5 Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.300110 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") pod \"8bd6ec41-c953-4165-a562-7d02937f0974\" (UID: \"8bd6ec41-c953-4165-a562-7d02937f0974\") " Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.305374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data" (OuterVolumeSpecName: "config-data") pod "8bd6ec41-c953-4165-a562-7d02937f0974" (UID: "8bd6ec41-c953-4165-a562-7d02937f0974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.402970 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd6ec41-c953-4165-a562-7d02937f0974-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.571577 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.584262 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.602725 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: E0120 15:09:49.603119 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603136 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" Jan 20 15:09:49 crc kubenswrapper[4949]: E0120 15:09:49.603162 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603169 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603318 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-api" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.603338 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" containerName="nova-api-log" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.604402 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.609193 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.610212 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.610389 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.610487 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707605 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707858 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.707876 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809402 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809718 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") 
pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.809999 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.810137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.810248 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.815387 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.819065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.819474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.820174 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.820260 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.832091 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") pod \"nova-api-0\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " pod="openstack/nova-api-0" Jan 
20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.922820 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.945532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1"} Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.947449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerStarted","Data":"4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640"} Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.947495 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerStarted","Data":"3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5"} Jan 20 15:09:49 crc kubenswrapper[4949]: I0120 15:09:49.967755 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jgctz" podStartSLOduration=1.9677324600000001 podStartE2EDuration="1.96773246s" podCreationTimestamp="2026-01-20 15:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:49.962361098 +0000 UTC m=+1185.772191966" watchObservedRunningTime="2026-01-20 15:09:49.96773246 +0000 UTC m=+1185.777563338" Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.398977 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:50 crc kubenswrapper[4949]: W0120 15:09:50.413416 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff5c5235_abcc_4af7_b9ee_c9eacb8c2104.slice/crio-1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513 WatchSource:0}: Error finding container 1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513: Status 404 returned error can't find the container with id 1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513 Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.807359 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd6ec41-c953-4165-a562-7d02937f0974" path="/var/lib/kubelet/pods/8bd6ec41-c953-4165-a562-7d02937f0974/volumes" Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.959419 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerStarted","Data":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.959470 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerStarted","Data":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.959484 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerStarted","Data":"1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 
15:09:50.962834 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e"} Jan 20 15:09:50 crc kubenswrapper[4949]: I0120 15:09:50.980543 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9805083300000002 podStartE2EDuration="1.98050833s" podCreationTimestamp="2026-01-20 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:09:50.977286807 +0000 UTC m=+1186.787117675" watchObservedRunningTime="2026-01-20 15:09:50.98050833 +0000 UTC m=+1186.790339188" Jan 20 15:09:52 crc kubenswrapper[4949]: I0120 15:09:52.367894 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:09:52 crc kubenswrapper[4949]: I0120 15:09:52.454862 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:52 crc kubenswrapper[4949]: I0120 15:09:52.455351 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" containerID="cri-o://044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7" gracePeriod=10 Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:52.999762 4949 generic.go:334] "Generic (PLEG): container finished" podID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerID="044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7" exitCode=0 Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.000089 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerDied","Data":"044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7"} Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.000119 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" event={"ID":"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791","Type":"ContainerDied","Data":"23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c"} Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.000131 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f9e3256757fecc1e46aaea1db5076c65c5e2184c7aae2f19b0bcf2ca99222c" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.052528 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086538 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086639 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086723 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.086934 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") pod \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\" (UID: \"ae2f6e22-4c5a-4d30-95a8-0cacc9f21791\") " Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.119708 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g" (OuterVolumeSpecName: "kube-api-access-l292g") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "kube-api-access-l292g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.161274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.184846 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config" (OuterVolumeSpecName: "config") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.188606 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l292g\" (UniqueName: \"kubernetes.io/projected/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-kube-api-access-l292g\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.188638 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.188648 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.200457 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.216467 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" (UID: "ae2f6e22-4c5a-4d30-95a8-0cacc9f21791"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.295557 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:53 crc kubenswrapper[4949]: I0120 15:09:53.295895 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.011884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a"} Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.011909 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-vr8t6" Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.051095 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.059833 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-vr8t6"] Jan 20 15:09:54 crc kubenswrapper[4949]: I0120 15:09:54.808647 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" path="/var/lib/kubelet/pods/ae2f6e22-4c5a-4d30-95a8-0cacc9f21791/volumes" Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.021377 4949 generic.go:334] "Generic (PLEG): container finished" podID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerID="4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640" exitCode=0 Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.021465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerDied","Data":"4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640"} Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.025489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerStarted","Data":"423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6"} Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.025699 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:09:55 crc kubenswrapper[4949]: I0120 15:09:55.068686 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.440142102 podStartE2EDuration="8.068667185s" podCreationTimestamp="2026-01-20 15:09:47 +0000 UTC" firstStartedPulling="2026-01-20 15:09:48.892343859 +0000 UTC m=+1184.702174717" lastFinishedPulling="2026-01-20 15:09:54.520868912 +0000 UTC m=+1190.330699800" observedRunningTime="2026-01-20 15:09:55.061037822 +0000 UTC m=+1190.870868680" watchObservedRunningTime="2026-01-20 15:09:55.068667185 +0000 UTC m=+1190.878498043" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.458632 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567003 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567083 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567135 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.567244 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") pod \"462eb38e-1d62-43e2-92c4-1074a1c054b9\" (UID: \"462eb38e-1d62-43e2-92c4-1074a1c054b9\") " Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.573065 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n" (OuterVolumeSpecName: "kube-api-access-bgv8n") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "kube-api-access-bgv8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.573886 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts" (OuterVolumeSpecName: "scripts") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.593684 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.597708 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data" (OuterVolumeSpecName: "config-data") pod "462eb38e-1d62-43e2-92c4-1074a1c054b9" (UID: "462eb38e-1d62-43e2-92c4-1074a1c054b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669754 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgv8n\" (UniqueName: \"kubernetes.io/projected/462eb38e-1d62-43e2-92c4-1074a1c054b9-kube-api-access-bgv8n\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669806 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669825 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:56 crc kubenswrapper[4949]: I0120 15:09:56.669842 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462eb38e-1d62-43e2-92c4-1074a1c054b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.046511 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jgctz" event={"ID":"462eb38e-1d62-43e2-92c4-1074a1c054b9","Type":"ContainerDied","Data":"3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5"} Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.046598 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d354fb3633c227a26abaa18077c063996f1229ddd22194e4f70ed9cc5e1cac5" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.046622 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jgctz" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.250013 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.250666 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" containerID="cri-o://14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.268376 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.268664 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" containerID="cri-o://02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.268739 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" containerID="cri-o://d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.279967 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.280361 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" 
containerName="nova-metadata-metadata" containerID="cri-o://62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.280650 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" containerID="cri-o://3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" gracePeriod=30 Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.929613 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992660 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992704 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992795 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992855 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992940 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.992959 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") pod \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\" (UID: \"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104\") " Jan 20 15:09:57 crc kubenswrapper[4949]: I0120 15:09:57.997143 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs" (OuterVolumeSpecName: "logs") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.002237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq" (OuterVolumeSpecName: "kube-api-access-k6lvq") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "kube-api-access-k6lvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.030686 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data" (OuterVolumeSpecName: "config-data") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.038382 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.081314 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.082889 4949 generic.go:334] "Generic (PLEG): container finished" podID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" exitCode=143 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.083090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerDied","Data":"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.084237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" (UID: "ff5c5235-abcc-4af7-b9ee-c9eacb8c2104"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.084744 4949 generic.go:334] "Generic (PLEG): container finished" podID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerID="14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e" exitCode=0 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.084824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerDied","Data":"14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086660 4949 generic.go:334] "Generic (PLEG): container finished" podID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" exitCode=0 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086683 4949 generic.go:334] "Generic (PLEG): container finished" podID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" exitCode=143 Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086704 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerDied","Data":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086728 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerDied","Data":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff5c5235-abcc-4af7-b9ee-c9eacb8c2104","Type":"ContainerDied","Data":"1aeb914f4fb5d9b7c3f70d0fed98fe9403497b4b62de91b6a8b38528bcc34513"} Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086767 4949 scope.go:117] "RemoveContainer" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.086922 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095019 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6lvq\" (UniqueName: \"kubernetes.io/projected/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-kube-api-access-k6lvq\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095061 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095070 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095080 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095091 4949 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.095100 4949 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.110972 4949 scope.go:117] "RemoveContainer" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.136645 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.144648 4949 scope.go:117] "RemoveContainer" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.145318 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.145932 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": container with ID starting with d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7 not found: ID does not exist" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.146046 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} err="failed to get container status \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": rpc error: code = NotFound desc = could not find container \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": container with ID starting with d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.146145 4949 scope.go:117] "RemoveContainer" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.147041 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": container with ID starting with 02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94 not found: ID does not exist" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.147358 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} err="failed to get container status \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": rpc error: code = NotFound desc = could not find container \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": container with ID starting with 02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.147449 4949 scope.go:117] "RemoveContainer" containerID="d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.148558 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.152246 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7"} err="failed to get container status \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": rpc error: code = NotFound desc = could not find container \"d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7\": container with ID starting with d79cf5666dc686dc4be81889ae70fef237b8501a93e540b5b4e02ae83c1430d7 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.152507 4949 scope.go:117] "RemoveContainer" containerID="02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.154070 4949 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94"} err="failed to get container status \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": rpc error: code = NotFound desc = could not find container \"02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94\": container with ID starting with 02b96ec0250a7db7b1d03d2c09c124053385918f3fe02899d97133625ba62e94 not found: ID does not exist" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.158278 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159090 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerName="nova-manage" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159196 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerName="nova-manage" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159282 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159351 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159437 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="init" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159507 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="init" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159633 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159706 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159787 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.159854 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" Jan 20 15:09:58 crc kubenswrapper[4949]: E0120 15:09:58.159933 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160011 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160367 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2f6e22-4c5a-4d30-95a8-0cacc9f21791" containerName="dnsmasq-dns" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160454 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" containerName="nova-scheduler-scheduler" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160555 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-api" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160645 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" containerName="nova-api-log" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.160729 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" containerName="nova-manage" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.162003 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.166653 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.167088 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.167344 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.172371 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.196548 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") pod \"88614638-70cb-4bcf-a017-bb7dbe17f962\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.196626 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") pod \"88614638-70cb-4bcf-a017-bb7dbe17f962\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.196759 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") pod \"88614638-70cb-4bcf-a017-bb7dbe17f962\" (UID: \"88614638-70cb-4bcf-a017-bb7dbe17f962\") " Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-public-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197109 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197135 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0174a61d-76ab-4198-91f1-d97291db561b-logs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197158 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197336 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-config-data\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.197360 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrv9l\" (UniqueName: \"kubernetes.io/projected/0174a61d-76ab-4198-91f1-d97291db561b-kube-api-access-wrv9l\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.202489 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt" (OuterVolumeSpecName: "kube-api-access-28qkt") pod "88614638-70cb-4bcf-a017-bb7dbe17f962" (UID: "88614638-70cb-4bcf-a017-bb7dbe17f962"). InnerVolumeSpecName "kube-api-access-28qkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.239271 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88614638-70cb-4bcf-a017-bb7dbe17f962" (UID: "88614638-70cb-4bcf-a017-bb7dbe17f962"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.243685 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data" (OuterVolumeSpecName: "config-data") pod "88614638-70cb-4bcf-a017-bb7dbe17f962" (UID: "88614638-70cb-4bcf-a017-bb7dbe17f962"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.298748 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-config-data\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299131 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrv9l\" (UniqueName: \"kubernetes.io/projected/0174a61d-76ab-4198-91f1-d97291db561b-kube-api-access-wrv9l\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-public-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299227 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299251 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0174a61d-76ab-4198-91f1-d97291db561b-logs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299466 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299483 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qkt\" (UniqueName: \"kubernetes.io/projected/88614638-70cb-4bcf-a017-bb7dbe17f962-kube-api-access-28qkt\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.299497 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88614638-70cb-4bcf-a017-bb7dbe17f962-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.300073 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0174a61d-76ab-4198-91f1-d97291db561b-logs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.302340 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-public-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.302448 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.302729 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-config-data\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.303804 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0174a61d-76ab-4198-91f1-d97291db561b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.317584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrv9l\" (UniqueName: \"kubernetes.io/projected/0174a61d-76ab-4198-91f1-d97291db561b-kube-api-access-wrv9l\") pod \"nova-api-0\" (UID: \"0174a61d-76ab-4198-91f1-d97291db561b\") " pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.490899 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.801780 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5c5235-abcc-4af7-b9ee-c9eacb8c2104" path="/var/lib/kubelet/pods/ff5c5235-abcc-4af7-b9ee-c9eacb8c2104/volumes" Jan 20 15:09:58 crc kubenswrapper[4949]: I0120 15:09:58.924684 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.099272 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"88614638-70cb-4bcf-a017-bb7dbe17f962","Type":"ContainerDied","Data":"5541649a6205e5a49e0ada9c0e4a8696df40e24c47784acaf792dbed4c578c7e"} Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.099329 4949 scope.go:117] "RemoveContainer" containerID="14fd084552f83c95b3b654d38113fc3617c7b752b9f603810e6a1f2726c8a67e" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.099376 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.104549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0174a61d-76ab-4198-91f1-d97291db561b","Type":"ContainerStarted","Data":"0f1a3cbee3a747e990ca06edb99c32d00380082af15dd8a2b11b3c13d5cf9118"} Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.128048 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.139094 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.149685 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.152187 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.155066 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.157435 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.217457 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.217507 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgwf\" (UniqueName: \"kubernetes.io/projected/51e2ed93-379c-457d-992a-57160c6be51a-kube-api-access-5wgwf\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.217660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-config-data\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.319477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-config-data\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.319930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.320055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgwf\" (UniqueName: \"kubernetes.io/projected/51e2ed93-379c-457d-992a-57160c6be51a-kube-api-access-5wgwf\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " 
pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.322827 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-config-data\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.324074 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e2ed93-379c-457d-992a-57160c6be51a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.343974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgwf\" (UniqueName: \"kubernetes.io/projected/51e2ed93-379c-457d-992a-57160c6be51a-kube-api-access-5wgwf\") pod \"nova-scheduler-0\" (UID: \"51e2ed93-379c-457d-992a-57160c6be51a\") " pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.471021 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 20 15:09:59 crc kubenswrapper[4949]: I0120 15:09:59.942100 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.117308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51e2ed93-379c-457d-992a-57160c6be51a","Type":"ContainerStarted","Data":"1aa29939e06abe06cc2f332dd6c17e842c249284962c999b67299a44bc656bed"} Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.121065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0174a61d-76ab-4198-91f1-d97291db561b","Type":"ContainerStarted","Data":"80445cb9df2a5e3b1f4410a1f1a448edb39ca43b6b5334ff6ba2e31400d796fa"} Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.121093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0174a61d-76ab-4198-91f1-d97291db561b","Type":"ContainerStarted","Data":"b6255c293efec89db417e280102d10fafdec207ec9f08f9eb79c188890c11e4b"} Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.138576 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.138559049 podStartE2EDuration="2.138559049s" podCreationTimestamp="2026-01-20 15:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:00.13579788 +0000 UTC m=+1195.945628748" watchObservedRunningTime="2026-01-20 15:10:00.138559049 +0000 UTC m=+1195.948389907" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.430006 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": read tcp 10.217.0.2:58568->10.217.0.182:8775: read: connection reset by peer" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.430030 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.182:8775/\": read tcp 10.217.0.2:58570->10.217.0.182:8775: read: connection reset by peer" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.804311 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88614638-70cb-4bcf-a017-bb7dbe17f962" path="/var/lib/kubelet/pods/88614638-70cb-4bcf-a017-bb7dbe17f962/volumes" Jan 20 15:10:00 crc kubenswrapper[4949]: I0120 15:10:00.970081 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.061603 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.061822 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.062752 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.063153 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.063235 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") pod \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\" (UID: \"42bf2757-50b8-4780-91b2-f0e4a62ea50c\") " Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.063957 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs" (OuterVolumeSpecName: "logs") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.064839 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42bf2757-50b8-4780-91b2-f0e4a62ea50c-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.086216 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4" (OuterVolumeSpecName: "kube-api-access-9bhx4") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "kube-api-access-9bhx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.098243 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data" (OuterVolumeSpecName: "config-data") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.101152 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.130667 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "42bf2757-50b8-4780-91b2-f0e4a62ea50c" (UID: "42bf2757-50b8-4780-91b2-f0e4a62ea50c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.132956 4949 generic.go:334] "Generic (PLEG): container finished" podID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" exitCode=0 Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.133087 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.133777 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerDied","Data":"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea"} Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.133855 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42bf2757-50b8-4780-91b2-f0e4a62ea50c","Type":"ContainerDied","Data":"b5f7d7790b27ef22d09958a0de70361e54da69dea5bf5665160ed80a276f0768"} Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.134090 4949 scope.go:117] "RemoveContainer" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.141552 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51e2ed93-379c-457d-992a-57160c6be51a","Type":"ContainerStarted","Data":"73c3e5f12252f0ae1df4019f7f31f3c6e8336b71ed42105acb202980708f0f29"} Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.165132 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.165106969 podStartE2EDuration="2.165106969s" podCreationTimestamp="2026-01-20 15:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:01.158767817 +0000 UTC m=+1196.968598675" watchObservedRunningTime="2026-01-20 15:10:01.165106969 +0000 UTC m=+1196.974937827" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.167563 4949 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bhx4\" (UniqueName: \"kubernetes.io/projected/42bf2757-50b8-4780-91b2-f0e4a62ea50c-kube-api-access-9bhx4\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.177651 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.177670 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.177685 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42bf2757-50b8-4780-91b2-f0e4a62ea50c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.178381 4949 scope.go:117] "RemoveContainer" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.211608 4949 scope.go:117] "RemoveContainer" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.211755 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.213156 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea\": container with ID starting with 62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea not found: ID does not exist" containerID="62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.213186 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea"} err="failed to get container status \"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea\": rpc error: code = NotFound desc = could not find container \"62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea\": container with ID starting with 62ec3b1c6d0384766ca2fe3dc73b1f08e1f47f03663bc05795b2c03b8bcc0eea not found: ID does not exist" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.213206 4949 scope.go:117] "RemoveContainer" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.214073 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80\": container with ID starting with 3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80 not found: ID does not exist" containerID="3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.214093 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80"} err="failed to get container status 
\"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80\": rpc error: code = NotFound desc = could not find container \"3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80\": container with ID starting with 3f0249a76b34d5d7c1425ff6d0c065fce311e00d87634aea836eb3d45049ac80 not found: ID does not exist" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.221707 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.227468 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.228170 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228294 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" Jan 20 15:10:01 crc kubenswrapper[4949]: E0120 15:10:01.228379 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228452 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228762 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-log" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.228863 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" containerName="nova-metadata-metadata" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.241430 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.252289 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.255139 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.256053 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280073 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvppw\" (UniqueName: \"kubernetes.io/projected/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-kube-api-access-kvppw\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280137 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280208 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-logs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280257 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.280320 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-config-data\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381814 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvppw\" (UniqueName: \"kubernetes.io/projected/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-kube-api-access-kvppw\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381924 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-logs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " 
pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381954 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.381996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-config-data\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.382664 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-logs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.386778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.386913 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-config-data\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.398211 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.399856 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvppw\" (UniqueName: \"kubernetes.io/projected/4185f7d0-b70a-4d49-82b9-e249bd1b2c48-kube-api-access-kvppw\") pod \"nova-metadata-0\" (UID: \"4185f7d0-b70a-4d49-82b9-e249bd1b2c48\") " pod="openstack/nova-metadata-0" Jan 20 15:10:01 crc kubenswrapper[4949]: I0120 15:10:01.578743 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 20 15:10:02 crc kubenswrapper[4949]: I0120 15:10:02.099277 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 20 15:10:02 crc kubenswrapper[4949]: I0120 15:10:02.152265 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4185f7d0-b70a-4d49-82b9-e249bd1b2c48","Type":"ContainerStarted","Data":"7b03474a083f4d7dd7543d9c695ae60e2cd7f68dd98a3023a29b3b030a6b212a"} Jan 20 15:10:02 crc kubenswrapper[4949]: I0120 15:10:02.799856 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42bf2757-50b8-4780-91b2-f0e4a62ea50c" path="/var/lib/kubelet/pods/42bf2757-50b8-4780-91b2-f0e4a62ea50c/volumes" Jan 20 15:10:03 crc kubenswrapper[4949]: I0120 15:10:03.164066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4185f7d0-b70a-4d49-82b9-e249bd1b2c48","Type":"ContainerStarted","Data":"36f442cdac5cb56104c4a45e5855152cc04b5dbb09628c44a5dcb3532627725e"} Jan 20 15:10:03 crc kubenswrapper[4949]: I0120 15:10:03.165170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4185f7d0-b70a-4d49-82b9-e249bd1b2c48","Type":"ContainerStarted","Data":"b26d835b8f44d8929467bab7247ccdee72a079bf467b424921e1b09387dd45bb"} Jan 20 15:10:03 crc kubenswrapper[4949]: I0120 15:10:03.189906 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.189887725 podStartE2EDuration="2.189887725s" podCreationTimestamp="2026-01-20 15:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:03.183459109 +0000 UTC m=+1198.993290017" watchObservedRunningTime="2026-01-20 15:10:03.189887725 +0000 UTC m=+1198.999718583" Jan 20 15:10:04 crc kubenswrapper[4949]: I0120 15:10:04.471225 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 20 15:10:06 crc kubenswrapper[4949]: I0120 15:10:06.579068 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 15:10:06 crc kubenswrapper[4949]: I0120 15:10:06.579693 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 20 15:10:08 crc kubenswrapper[4949]: I0120 15:10:08.491296 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:10:08 crc kubenswrapper[4949]: I0120 15:10:08.491640 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.472686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.504110 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.505979 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0174a61d-76ab-4198-91f1-d97291db561b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:10:09 crc kubenswrapper[4949]: I0120 15:10:09.505904 4949 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="0174a61d-76ab-4198-91f1-d97291db561b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:10:10 crc kubenswrapper[4949]: I0120 15:10:10.261330 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 20 15:10:11 crc kubenswrapper[4949]: I0120 15:10:11.579321 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 15:10:11 crc kubenswrapper[4949]: I0120 15:10:11.579720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 20 15:10:12 crc kubenswrapper[4949]: I0120 15:10:12.626883 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4185f7d0-b70a-4d49-82b9-e249bd1b2c48" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:10:12 crc kubenswrapper[4949]: I0120 15:10:12.626883 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4185f7d0-b70a-4d49-82b9-e249bd1b2c48" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.394786 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.506994 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.507789 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.513056 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 20 15:10:18 crc kubenswrapper[4949]: I0120 15:10:18.518599 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 15:10:19 crc kubenswrapper[4949]: I0120 15:10:19.318962 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 20 15:10:19 crc kubenswrapper[4949]: I0120 15:10:19.329048 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 20 15:10:21 crc kubenswrapper[4949]: I0120 15:10:21.588161 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 15:10:21 crc kubenswrapper[4949]: I0120 15:10:21.593986 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 20 15:10:21 crc kubenswrapper[4949]: I0120 15:10:21.603903 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 15:10:22 crc kubenswrapper[4949]: I0120 15:10:22.352811 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 20 15:10:30 crc kubenswrapper[4949]: I0120 15:10:30.992773 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:10:31 crc kubenswrapper[4949]: I0120 
Jan 20 15:10:35 crc kubenswrapper[4949]: I0120 15:10:35.111877 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq" containerID="cri-o://4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d" gracePeriod=604796
Jan 20 15:10:36 crc kubenswrapper[4949]: I0120 15:10:36.054144 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" containerID="cri-o://71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" gracePeriod=604796
Jan 20 15:10:37 crc kubenswrapper[4949]: I0120 15:10:37.159997 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused"
Jan 20 15:10:37 crc kubenswrapper[4949]: I0120 15:10:37.444189 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused"
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.683510 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850345 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850488 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850543 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850605 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850682 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850710 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850772 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850820 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850857 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.850907 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") pod \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\" (UID: \"cf4b5f65-52fe-4e8b-9d12-817e94e9b629\") "
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.851775 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.852256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.853654 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.856468 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info" (OuterVolumeSpecName: "pod-info") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.857980 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct" (OuterVolumeSpecName: "kube-api-access-pr5ct") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "kube-api-access-pr5ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.861500 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.861606 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.878747 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.913811 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data" (OuterVolumeSpecName: "config-data") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.922703 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf" (OuterVolumeSpecName: "server-conf") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.952754 4949 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953024 4949 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953113 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953200 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953292 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953372 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953460 4949 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-server-conf\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953592 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953673 4949 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-pod-info\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.953745 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr5ct\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-kube-api-access-pr5ct\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.975275 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 20 15:10:41 crc kubenswrapper[4949]: I0120 15:10:41.992633 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cf4b5f65-52fe-4e8b-9d12-817e94e9b629" (UID: "cf4b5f65-52fe-4e8b-9d12-817e94e9b629"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.055099 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.055137 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf4b5f65-52fe-4e8b-9d12-817e94e9b629-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122669 4949 generic.go:334] "Generic (PLEG): container finished" podID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d" exitCode=0 Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerDied","Data":"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"} Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122747 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf4b5f65-52fe-4e8b-9d12-817e94e9b629","Type":"ContainerDied","Data":"3ba62d6c38f112ac55fc459153392bef260b35932431e703432380fb98680b57"} Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122765 4949 scope.go:117] "RemoveContainer" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.122914 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.163884 4949 scope.go:117] "RemoveContainer" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.189501 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.199951 4949 scope.go:117] "RemoveContainer" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d" Jan 20 15:10:42 crc kubenswrapper[4949]: E0120 15:10:42.200444 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d\": container with ID starting with 4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d not found: ID does not exist" containerID="4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.200481 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d"} err="failed to get container status \"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d\": rpc error: code = NotFound desc = could not find container \"4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d\": container with ID starting with 4bd39f77caae7a51919b9ad5ce9552e8e5703dc696101b60a417cca06776920d not found: ID does not exist" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.200507 4949 scope.go:117] "RemoveContainer" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131" Jan 20 15:10:42 crc 
kubenswrapper[4949]: E0120 15:10:42.200832 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131\": container with ID starting with ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131 not found: ID does not exist" containerID="ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.200910 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131"} err="failed to get container status \"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131\": rpc error: code = NotFound desc = could not find container \"ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131\": container with ID starting with ca4a5fc7927fd69eed39a285899a8652f615fb5eeb59420f66d48325a2bd0131 not found: ID does not exist" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.224937 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240197 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:10:42 crc kubenswrapper[4949]: E0120 15:10:42.240725 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240745 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq" Jan 20 15:10:42 crc kubenswrapper[4949]: E0120 15:10:42.240757 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="setup-container" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240762 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="setup-container" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.240921 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" containerName="rabbitmq" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.242989 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245113 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245381 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245620 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.245919 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.246276 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cpjq5" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.246469 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.251825 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.259870 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359730 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359767 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-config-data\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.359988 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360038 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmxj\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-kube-api-access-tlmxj\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360314 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360338 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.360370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.461764 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462039 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-config-data\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462056 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462084 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " 
pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462113 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462151 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmxj\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-kube-api-access-tlmxj\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462195 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462217 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462231 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.462249 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463068 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-config-data\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463638 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463766 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.463828 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.466438 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.466966 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.469171 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.474748 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.482299 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.487376 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmxj\" (UniqueName: \"kubernetes.io/projected/18d74874-b8f5-4706-abfe-c8d1cb7bb21b-kube-api-access-tlmxj\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.508938 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"18d74874-b8f5-4706-abfe-c8d1cb7bb21b\") " pod="openstack/rabbitmq-server-0" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.585345 4949 util.go:30] "No sandbox for pod can be 
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.594322 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775113 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775186 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775213 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775272 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775351 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775421 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775466 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775498 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775554 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775578 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.775652 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") pod \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\" (UID: \"f3c1f546-0796-457f-8b06-a5ffd11e1b36\") "
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.778329 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.778786 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.781582 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.784510 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.784537 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8" (OuterVolumeSpecName: "kube-api-access-vkpc8") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "kube-api-access-vkpc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.785431 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.788816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.790281 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info" (OuterVolumeSpecName: "pod-info") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.798636 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data" (OuterVolumeSpecName: "config-data") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.819446 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4b5f65-52fe-4e8b-9d12-817e94e9b629" path="/var/lib/kubelet/pods/cf4b5f65-52fe-4e8b-9d12-817e94e9b629/volumes" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.837696 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf" (OuterVolumeSpecName: "server-conf") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877748 4949 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877811 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877826 4949 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3c1f546-0796-457f-8b06-a5ffd11e1b36-pod-info\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877851 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877891 4949 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3c1f546-0796-457f-8b06-a5ffd11e1b36-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877907 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877921 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkpc8\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-kube-api-access-vkpc8\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877935 4949 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-server-conf\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877973 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3c1f546-0796-457f-8b06-a5ffd11e1b36-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.877985 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.880683 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f3c1f546-0796-457f-8b06-a5ffd11e1b36" (UID: "f3c1f546-0796-457f-8b06-a5ffd11e1b36"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.904901 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.979882 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:42 crc kubenswrapper[4949]: I0120 15:10:42.979918 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3c1f546-0796-457f-8b06-a5ffd11e1b36-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.122672 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139000 4949 generic.go:334] "Generic (PLEG): container finished" podID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" exitCode=0 Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139042 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerDied","Data":"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281"} Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139068 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f3c1f546-0796-457f-8b06-a5ffd11e1b36","Type":"ContainerDied","Data":"554ea4585f02865d01f3bb368381beaf1c61c25feefa6a8443983240c2158e5a"} Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139086 4949 scope.go:117] "RemoveContainer" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.139189 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.180933 4949 scope.go:117] "RemoveContainer" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.191093 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.199137 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.223876 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.224348 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="setup-container" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.224363 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="setup-container" Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.224376 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.224384 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.224621 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" containerName="rabbitmq" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.225281 4949 scope.go:117] "RemoveContainer" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.225694 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.228149 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281\": container with ID starting with 71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281 not found: ID does not exist" containerID="71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.228198 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281"} err="failed to get container status \"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281\": rpc error: code = NotFound desc = could not find container \"71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281\": container with ID starting with 71b259a3a9d4dbcf730d494ae2918cdb9de5bc9f82a8f203910c356ff3142281 not found: ID does not exist" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.228226 4949 scope.go:117] "RemoveContainer" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" Jan 20 15:10:43 crc kubenswrapper[4949]: E0120 15:10:43.231778 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c\": container with ID starting with 7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c not found: ID does not exist" containerID="7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.231822 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c"} err="failed to get container status \"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c\": rpc error: code = NotFound desc = could not find container \"7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c\": container with ID starting with 7dd18e6359d5c8872773206d5dde74c0d8bb37f3ccb82404f814158f5c25c21c not found: ID does not exist" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.232315 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.232500 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.232949 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233187 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233440 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233684 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.233926 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-server-dockercfg-2fdrl" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.243261 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.386948 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81813586-eebe-4c95-ad8b-433b8c501337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387400 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387484 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387604 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81813586-eebe-4c95-ad8b-433b8c501337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387655 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387695 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387757 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387854 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387911 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.387954 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8qf\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-kube-api-access-xt8qf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489113 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81813586-eebe-4c95-ad8b-433b8c501337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489140 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489202 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489245 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489273 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489297 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8qf\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-kube-api-access-xt8qf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489325 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81813586-eebe-4c95-ad8b-433b8c501337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.489549 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.490166 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.490879 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.490964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.491406 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 
crc kubenswrapper[4949]: I0120 15:10:43.491643 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/81813586-eebe-4c95-ad8b-433b8c501337-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.494464 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.494777 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/81813586-eebe-4c95-ad8b-433b8c501337-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.494892 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/81813586-eebe-4c95-ad8b-433b8c501337-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.508578 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.513726 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8qf\" (UniqueName: \"kubernetes.io/projected/81813586-eebe-4c95-ad8b-433b8c501337-kube-api-access-xt8qf\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.539309 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"81813586-eebe-4c95-ad8b-433b8c501337\") " pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:43 crc kubenswrapper[4949]: I0120 15:10:43.624553 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.054922 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.156344 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerStarted","Data":"8f209c95db3b06f8092c3d65f9cf16cb8bbd63aec85074513798ef6de863457b"} Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.158878 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerStarted","Data":"fb7c1f66a1db9ea2f8ac89f67a24e86d02db7452cc7936bccad31e2d3c3fa80c"} Jan 20 15:10:44 crc kubenswrapper[4949]: I0120 15:10:44.812239 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c1f546-0796-457f-8b06-a5ffd11e1b36" path="/var/lib/kubelet/pods/f3c1f546-0796-457f-8b06-a5ffd11e1b36/volumes" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.177599 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerStarted","Data":"ca4e5cc776d6975afb5c4e9ba101a0fcc325a77ea724d14d99aa36042a872478"} Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.179171 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerStarted","Data":"e08fb61215a8a289efb99aefcdf14e611881ced7e01ea67a6b22da694eb3e81c"} Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.596147 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.597573 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.599743 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.612221 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749402 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749460 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749604 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.749628 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851017 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851348 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" 
(UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851454 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851482 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851537 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.851553 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.852250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.852295 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.853705 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.854906 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.857925 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 
15:10:46.878470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"dnsmasq-dns-578b8d767c-6bjlw\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:46 crc kubenswrapper[4949]: I0120 15:10:46.975369 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:47 crc kubenswrapper[4949]: I0120 15:10:47.421561 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:10:47 crc kubenswrapper[4949]: W0120 15:10:47.425103 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4944a90_5076_4b63_8f86_749ad6555dbe.slice/crio-d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a WatchSource:0}: Error finding container d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a: Status 404 returned error can't find the container with id d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a Jan 20 15:10:48 crc kubenswrapper[4949]: I0120 15:10:48.200389 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" exitCode=0 Jan 20 15:10:48 crc kubenswrapper[4949]: I0120 15:10:48.200473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerDied","Data":"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009"} Jan 20 15:10:48 crc kubenswrapper[4949]: I0120 15:10:48.200884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerStarted","Data":"d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a"} Jan 20 15:10:49 crc kubenswrapper[4949]: I0120 15:10:49.213295 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerStarted","Data":"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86"} Jan 20 15:10:49 crc kubenswrapper[4949]: I0120 15:10:49.213601 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:49 crc kubenswrapper[4949]: I0120 15:10:49.245184 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" podStartSLOduration=3.245163247 podStartE2EDuration="3.245163247s" podCreationTimestamp="2026-01-20 15:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:49.233575816 +0000 UTC m=+1245.043406684" watchObservedRunningTime="2026-01-20 15:10:49.245163247 +0000 UTC m=+1245.054994115" Jan 20 15:10:56 crc kubenswrapper[4949]: I0120 15:10:56.976885 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.090004 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:10:57 crc 
kubenswrapper[4949]: I0120 15:10:57.090223 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" containerID="cri-o://3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107" gracePeriod=10 Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.294756 4949 generic.go:334] "Generic (PLEG): container finished" podID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerID="3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107" exitCode=0 Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.295008 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerDied","Data":"3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107"} Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.317493 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.318927 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.340700 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381636 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381856 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381885 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381932 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.381957 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483401 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483473 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483551 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483569 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483592 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.483609 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.484437 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.484602 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.484812 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.485041 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.485113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.504762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"dnsmasq-dns-fbc59fbb7-tm44w\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.570121 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.636288 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686358 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686456 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686539 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.686567 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") pod \"f0e49de8-75d6-4106-894c-b8b22ef6f279\" (UID: \"f0e49de8-75d6-4106-894c-b8b22ef6f279\") " Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.690264 4949 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8" (OuterVolumeSpecName: "kube-api-access-wl5b8") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "kube-api-access-wl5b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.748181 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config" (OuterVolumeSpecName: "config") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.760263 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.760940 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.767311 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0e49de8-75d6-4106-894c-b8b22ef6f279" (UID: "f0e49de8-75d6-4106-894c-b8b22ef6f279"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789683 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl5b8\" (UniqueName: \"kubernetes.io/projected/f0e49de8-75d6-4106-894c-b8b22ef6f279-kube-api-access-wl5b8\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789724 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789736 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789749 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:57 crc kubenswrapper[4949]: I0120 15:10:57.789757 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0e49de8-75d6-4106-894c-b8b22ef6f279-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:10:58 crc kubenswrapper[4949]: W0120 15:10:58.090406 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5fd960d_ae25_4d53_bf2e_c952c18f5c4e.slice/crio-28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92 WatchSource:0}: Error finding container 28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92: Status 404 returned error can't find the container with id 28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92 Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.090626 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.305592 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" event={"ID":"f0e49de8-75d6-4106-894c-b8b22ef6f279","Type":"ContainerDied","Data":"55f29ec4bac4ac4376fe3452c37cd668b9f8ffe67866fcc276100056a1141b3d"} Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.305867 4949 scope.go:117] "RemoveContainer" containerID="3e794d417408d49de2a87a1f6db8da05f11c9ed5e0673b14ced766ff3bffc107" Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.305978 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.309782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerStarted","Data":"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854"} Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.309830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerStarted","Data":"28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92"} Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.452280 4949 scope.go:117] "RemoveContainer" containerID="70b240f3fe0404274eea1d589f15c7d987d02877fd9ababf09b1f0ab34e25351" Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.483949 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.493356 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zznrk"] Jan 20 15:10:58 crc kubenswrapper[4949]: I0120 15:10:58.801201 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" path="/var/lib/kubelet/pods/f0e49de8-75d6-4106-894c-b8b22ef6f279/volumes" Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.323743 4949 generic.go:334] "Generic (PLEG): container finished" podID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" exitCode=0 Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.323816 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerDied","Data":"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854"} Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.324065 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.324075 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerStarted","Data":"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e"} Jan 20 15:10:59 crc kubenswrapper[4949]: I0120 15:10:59.370813 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" podStartSLOduration=2.370793812 podStartE2EDuration="2.370793812s" podCreationTimestamp="2026-01-20 15:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:59.338668466 +0000 UTC m=+1255.148499344" watchObservedRunningTime="2026-01-20 15:10:59.370793812 +0000 UTC m=+1255.180624670" Jan 20 15:11:02 crc kubenswrapper[4949]: I0120 15:11:02.366407 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68d4b6d797-zznrk" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Jan 20 15:11:07 crc kubenswrapper[4949]: I0120 15:11:07.638549 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:11:07 crc kubenswrapper[4949]: I0120 15:11:07.723122 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:11:07 crc kubenswrapper[4949]: I0120 15:11:07.723444 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" containerID="cri-o://8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" gracePeriod=10 Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.144468 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.194957 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195092 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195120 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195237 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195291 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.195343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") pod \"c4944a90-5076-4b63-8f86-749ad6555dbe\" (UID: \"c4944a90-5076-4b63-8f86-749ad6555dbe\") " Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.224049 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx" (OuterVolumeSpecName: "kube-api-access-kb5tx") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "kube-api-access-kb5tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.252005 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.259618 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.262086 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.266053 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.292914 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config" (OuterVolumeSpecName: "config") pod "c4944a90-5076-4b63-8f86-749ad6555dbe" (UID: "c4944a90-5076-4b63-8f86-749ad6555dbe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298081 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298118 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298129 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb5tx\" (UniqueName: \"kubernetes.io/projected/c4944a90-5076-4b63-8f86-749ad6555dbe-kube-api-access-kb5tx\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298141 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298149 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.298158 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4944a90-5076-4b63-8f86-749ad6555dbe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417577 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" exitCode=0 Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417638 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerDied","Data":"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86"} Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417687 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" event={"ID":"c4944a90-5076-4b63-8f86-749ad6555dbe","Type":"ContainerDied","Data":"d7e0fb420a7a375dd1492683838a955a89ff83e151d71125efa7648355c3107a"} Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417682 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-6bjlw" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.417720 4949 scope.go:117] "RemoveContainer" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.453780 4949 scope.go:117] "RemoveContainer" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.461801 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.473158 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-6bjlw"] Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.481307 4949 scope.go:117] "RemoveContainer" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" Jan 20 15:11:08 crc kubenswrapper[4949]: E0120 15:11:08.481795 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86\": container with ID starting with 8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86 not found: ID does not exist" containerID="8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.481828 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86"} err="failed to get container status \"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86\": rpc error: code = NotFound desc = could not find container \"8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86\": container with ID starting with 8f18536314166c5634756fa841cdbe59d9895addff57bb2d133c8e9d64826e86 not found: ID does not exist" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.481852 4949 scope.go:117] "RemoveContainer" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" Jan 20 15:11:08 crc kubenswrapper[4949]: E0120 15:11:08.482066 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009\": container with ID starting with ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009 not found: ID does not exist" containerID="ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.482089 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009"} err="failed to get container status \"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009\": rpc error: code = NotFound desc = could not find container \"ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009\": container with ID starting with ca0d5de7c552cd8e6b7673c0ad47f047fc8fc12dd1b8bde1d4fac29f10e22009 not found: ID does not exist" Jan 20 15:11:08 crc kubenswrapper[4949]: I0120 15:11:08.799266 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" path="/var/lib/kubelet/pods/c4944a90-5076-4b63-8f86-749ad6555dbe/volumes" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.392222 4949 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393354 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393375 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393406 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393417 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393437 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393449 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="init" Jan 20 15:11:13 crc kubenswrapper[4949]: E0120 15:11:13.393468 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393478 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393910 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e49de8-75d6-4106-894c-b8b22ef6f279" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.393944 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4944a90-5076-4b63-8f86-749ad6555dbe" containerName="dnsmasq-dns" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.394879 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.399677 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.399843 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.399885 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.400656 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.409640 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.496938 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.497091 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.497254 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.497411 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598326 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598421 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfcf\" (UniqueName: 
\"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598464 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.598496 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.606877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.607197 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.614142 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.621324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:13 crc kubenswrapper[4949]: I0120 15:11:13.718766 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:14 crc kubenswrapper[4949]: I0120 15:11:14.283214 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:11:14 crc kubenswrapper[4949]: I0120 15:11:14.292907 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:11:14 crc kubenswrapper[4949]: I0120 15:11:14.477308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerStarted","Data":"c5990735c43e33ef1cd5b76d611b48d5bc2594cebc427562820c71779de689dd"} Jan 20 15:11:18 crc kubenswrapper[4949]: I0120 15:11:18.514663 4949 generic.go:334] "Generic (PLEG): container finished" podID="81813586-eebe-4c95-ad8b-433b8c501337" containerID="ca4e5cc776d6975afb5c4e9ba101a0fcc325a77ea724d14d99aa36042a872478" exitCode=0 Jan 20 15:11:18 crc kubenswrapper[4949]: I0120 15:11:18.514757 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerDied","Data":"ca4e5cc776d6975afb5c4e9ba101a0fcc325a77ea724d14d99aa36042a872478"} Jan 20 15:11:18 crc kubenswrapper[4949]: I0120 15:11:18.518737 4949 generic.go:334] "Generic (PLEG): container finished" podID="18d74874-b8f5-4706-abfe-c8d1cb7bb21b" containerID="e08fb61215a8a289efb99aefcdf14e611881ced7e01ea67a6b22da694eb3e81c" exitCode=0 Jan 20 15:11:18 crc kubenswrapper[4949]: I0120 15:11:18.518825 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerDied","Data":"e08fb61215a8a289efb99aefcdf14e611881ced7e01ea67a6b22da694eb3e81c"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.578296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"81813586-eebe-4c95-ad8b-433b8c501337","Type":"ContainerStarted","Data":"73645e53e2d63ae130e7c318b3aba4707b845ef89697cee9c55e6d29581b4bab"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.580590 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18d74874-b8f5-4706-abfe-c8d1cb7bb21b","Type":"ContainerStarted","Data":"911b238238730f1c23ad133a7d88638fd9cc43d5cae357f9bfc96af368b0f4d5"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.580834 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.584213 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerStarted","Data":"f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3"} Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.603414 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.603395906 podStartE2EDuration="41.603395906s" podCreationTimestamp="2026-01-20 15:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:11:24.603007143 +0000 UTC m=+1280.412838011" watchObservedRunningTime="2026-01-20 
15:11:24.603395906 +0000 UTC m=+1280.413226764" Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.627645 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" podStartSLOduration=2.529063213 podStartE2EDuration="11.627627411s" podCreationTimestamp="2026-01-20 15:11:13 +0000 UTC" firstStartedPulling="2026-01-20 15:11:14.292405298 +0000 UTC m=+1270.102236196" lastFinishedPulling="2026-01-20 15:11:23.390969536 +0000 UTC m=+1279.200800394" observedRunningTime="2026-01-20 15:11:24.62295407 +0000 UTC m=+1280.432784928" watchObservedRunningTime="2026-01-20 15:11:24.627627411 +0000 UTC m=+1280.437458269" Jan 20 15:11:24 crc kubenswrapper[4949]: I0120 15:11:24.662098 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.662081811 podStartE2EDuration="42.662081811s" podCreationTimestamp="2026-01-20 15:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:11:24.659490718 +0000 UTC m=+1280.469321596" watchObservedRunningTime="2026-01-20 15:11:24.662081811 +0000 UTC m=+1280.471912669" Jan 20 15:11:27 crc kubenswrapper[4949]: I0120 15:11:27.152383 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:11:27 crc kubenswrapper[4949]: I0120 15:11:27.153241 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:11:33 crc kubenswrapper[4949]: I0120 15:11:33.625928 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:11:33 crc kubenswrapper[4949]: I0120 15:11:33.629977 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 20 15:11:36 crc kubenswrapper[4949]: I0120 15:11:36.721667 4949 generic.go:334] "Generic (PLEG): container finished" podID="96f6253d-b990-4892-bd1f-9534caf70130" containerID="f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3" exitCode=0 Jan 20 15:11:36 crc kubenswrapper[4949]: I0120 15:11:36.721814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerDied","Data":"f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3"} Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.123177 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259131 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259456 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259574 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.259631 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") pod \"96f6253d-b990-4892-bd1f-9534caf70130\" (UID: \"96f6253d-b990-4892-bd1f-9534caf70130\") " Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.264196 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.266803 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf" (OuterVolumeSpecName: "kube-api-access-kzfcf") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "kube-api-access-kzfcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.286076 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.301818 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory" (OuterVolumeSpecName: "inventory") pod "96f6253d-b990-4892-bd1f-9534caf70130" (UID: "96f6253d-b990-4892-bd1f-9534caf70130"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363301 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363357 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfcf\" (UniqueName: \"kubernetes.io/projected/96f6253d-b990-4892-bd1f-9534caf70130-kube-api-access-kzfcf\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363379 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.363400 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96f6253d-b990-4892-bd1f-9534caf70130-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.767441 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" event={"ID":"96f6253d-b990-4892-bd1f-9534caf70130","Type":"ContainerDied","Data":"c5990735c43e33ef1cd5b76d611b48d5bc2594cebc427562820c71779de689dd"} Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.767498 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5990735c43e33ef1cd5b76d611b48d5bc2594cebc427562820c71779de689dd" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.767628 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.854916 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"] Jan 20 15:11:38 crc kubenswrapper[4949]: E0120 15:11:38.855510 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f6253d-b990-4892-bd1f-9534caf70130" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.855568 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f6253d-b990-4892-bd1f-9534caf70130" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.855875 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f6253d-b990-4892-bd1f-9534caf70130" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.856832 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.859823 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.859893 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.860296 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.860473 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.867931 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"] Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.980936 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.981007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.981070 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:38 crc kubenswrapper[4949]: I0120 15:11:38.981227 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083303 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083441 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.083551 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.090608 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.091068 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.094271 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.104639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.180204 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:11:39 crc kubenswrapper[4949]: W0120 15:11:39.725731 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b69ef09_6dac_4ebb_b970_9c94553bea5a.slice/crio-47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b WatchSource:0}: Error finding container 47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b: Status 404 returned error can't find the container with id 47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.726715 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"] Jan 20 15:11:39 crc kubenswrapper[4949]: I0120 15:11:39.776140 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerStarted","Data":"47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b"} Jan 20 15:11:40 crc kubenswrapper[4949]: I0120 15:11:40.785705 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerStarted","Data":"0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624"} Jan 20 15:11:40 crc kubenswrapper[4949]: I0120 15:11:40.810818 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" podStartSLOduration=2.267182357 podStartE2EDuration="2.810793696s" podCreationTimestamp="2026-01-20 15:11:38 +0000 UTC" firstStartedPulling="2026-01-20 15:11:39.731277944 +0000 UTC m=+1295.541108802" lastFinishedPulling="2026-01-20 15:11:40.274889233 +0000 UTC m=+1296.084720141" observedRunningTime="2026-01-20 15:11:40.800751056 +0000 UTC m=+1296.610581934" watchObservedRunningTime="2026-01-20 15:11:40.810793696 +0000 UTC m=+1296.620624564" Jan 20 15:11:42 crc kubenswrapper[4949]: I0120 15:11:42.588865 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 20 15:11:57 crc kubenswrapper[4949]: I0120 15:11:57.152406 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:11:57 crc kubenswrapper[4949]: I0120 15:11:57.152955 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:12:15 crc kubenswrapper[4949]: I0120 15:12:15.035759 4949 scope.go:117] "RemoveContainer" containerID="75eeb26a7f68d468851df9c835f2048a52f6a0810b6e958edad4bcb11c72b760" Jan 20 15:12:15 crc kubenswrapper[4949]: I0120 15:12:15.064059 4949 scope.go:117] "RemoveContainer" containerID="68276b2a29712da0c8b68150ac12b491bc8fd4c69ba0f9839e1490af457e18ac" Jan 20 15:12:15 crc kubenswrapper[4949]: I0120 15:12:15.153376 4949 scope.go:117] "RemoveContainer" 
containerID="e9c80696f38cbd4ba569f13cd01400c2307b69be4f65ed7b783d731d39600746" Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.151945 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.152598 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.152652 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.153328 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:12:27 crc kubenswrapper[4949]: I0120 15:12:27.153391 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e" gracePeriod=600 Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.287608 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e" exitCode=0 Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.287720 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e"} Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.288308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"} Jan 20 15:12:28 crc kubenswrapper[4949]: I0120 15:12:28.288338 4949 scope.go:117] "RemoveContainer" containerID="bc459cabba9af6fff1a73667740f267bab9c10d7afb545de052e7b20b79c6b1b" Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.230325 4949 scope.go:117] "RemoveContainer" containerID="1c3e4aa1ea308f9c97aea7bb6cb6f532b81619e27a772434fe622f19cd656cfa" Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.285403 4949 scope.go:117] "RemoveContainer" containerID="5f1fdd3c55be3dda53f44c4454f6a232b12073326f9707aae8372a9a4091a1ec" Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.335310 4949 scope.go:117] "RemoveContainer" containerID="1bfde9055b8627100b5c93b232b289e018e33d5c7ac7bc51099c7c1742a2725c" Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 
15:13:15.368031 4949 scope.go:117] "RemoveContainer" containerID="a88c0c9a85129d9d6ee8562e849b80140bdaffa17c443b17a4de9fabf84ee113" Jan 20 15:13:15 crc kubenswrapper[4949]: I0120 15:13:15.412246 4949 scope.go:117] "RemoveContainer" containerID="7c2f2d8410be6184605e8ae8d978b47164b8a0cbb76d3f7c6288f8d1fc203aa8" Jan 20 15:14:27 crc kubenswrapper[4949]: I0120 15:14:27.151759 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:14:27 crc kubenswrapper[4949]: I0120 15:14:27.152304 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:14:53 crc kubenswrapper[4949]: I0120 15:14:53.168090 4949 generic.go:334] "Generic (PLEG): container finished" podID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerID="0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624" exitCode=0 Jan 20 15:14:53 crc kubenswrapper[4949]: I0120 15:14:53.168208 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerDied","Data":"0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624"} Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.626781 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781309 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781413 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781495 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.781609 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") pod \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\" (UID: \"3b69ef09-6dac-4ebb-b970-9c94553bea5a\") " Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.787550 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.788304 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q" (OuterVolumeSpecName: "kube-api-access-sqb8q") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "kube-api-access-sqb8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.815275 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.815408 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory" (OuterVolumeSpecName: "inventory") pod "3b69ef09-6dac-4ebb-b970-9c94553bea5a" (UID: "3b69ef09-6dac-4ebb-b970-9c94553bea5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883691 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883723 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883732 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b69ef09-6dac-4ebb-b970-9c94553bea5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:14:54 crc kubenswrapper[4949]: I0120 15:14:54.883742 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqb8q\" (UniqueName: \"kubernetes.io/projected/3b69ef09-6dac-4ebb-b970-9c94553bea5a-kube-api-access-sqb8q\") on node \"crc\" DevicePath \"\"" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.191738 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" event={"ID":"3b69ef09-6dac-4ebb-b970-9c94553bea5a","Type":"ContainerDied","Data":"47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b"} Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.191839 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47fe339641c6d995a1e8fb286aca4244b32d40766ec2eb422a3de6923dceeb2b" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.191869 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.314948 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"] Jan 20 15:14:55 crc kubenswrapper[4949]: E0120 15:14:55.316408 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.316461 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.317799 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.318658 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.321362 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.321969 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.322419 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.326065 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.328674 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"] Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.501153 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.501969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.502037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 
crc kubenswrapper[4949]: I0120 15:14:55.604379 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.604443 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.604580 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.610634 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.610655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.623525 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rx986\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:55 crc kubenswrapper[4949]: I0120 15:14:55.644660 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:14:56 crc kubenswrapper[4949]: I0120 15:14:56.198599 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"] Jan 20 15:14:57 crc kubenswrapper[4949]: I0120 15:14:57.154903 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:14:57 crc kubenswrapper[4949]: I0120 15:14:57.155419 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:14:57 crc kubenswrapper[4949]: I0120 15:14:57.211990 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerStarted","Data":"79e3fd11ead5b5d6100c8c9e2f04259848a510f2e84a78bb74dcea8b8590c187"} Jan 20 15:14:58 crc kubenswrapper[4949]: I0120 15:14:58.223201 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerStarted","Data":"e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3"} Jan 20 15:14:58 crc kubenswrapper[4949]: I0120 15:14:58.242398 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" podStartSLOduration=1.7207840810000001 podStartE2EDuration="3.242357395s" podCreationTimestamp="2026-01-20 15:14:55 +0000 UTC" firstStartedPulling="2026-01-20 15:14:56.20219408 +0000 UTC m=+1492.012024938" lastFinishedPulling="2026-01-20 15:14:57.723767394 +0000 UTC m=+1493.533598252" observedRunningTime="2026-01-20 15:14:58.241832649 +0000 UTC m=+1494.051663537" watchObservedRunningTime="2026-01-20 15:14:58.242357395 +0000 UTC m=+1494.052188253" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.151156 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"] Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.153955 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.157729 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.158153 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.162530 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"] Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.297708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.297869 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.297927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.399801 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.399890 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.400133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.400817 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod 
\"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.409324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.418903 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"collect-profiles-29482035-p8tzb\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.478441 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:00 crc kubenswrapper[4949]: I0120 15:15:00.970140 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"] Jan 20 15:15:00 crc kubenswrapper[4949]: W0120 15:15:00.974983 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod574a1f73_b7b1_4ff1_9621_3c13ad507d66.slice/crio-e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387 WatchSource:0}: Error finding container e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387: Status 404 returned error can't find the container with id e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387 Jan 20 15:15:01 crc kubenswrapper[4949]: I0120 15:15:01.270975 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerStarted","Data":"1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b"} Jan 20 15:15:01 crc kubenswrapper[4949]: I0120 15:15:01.271031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerStarted","Data":"e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387"} Jan 20 15:15:01 crc kubenswrapper[4949]: I0120 15:15:01.295615 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" podStartSLOduration=1.295596817 podStartE2EDuration="1.295596817s" podCreationTimestamp="2026-01-20 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:15:01.288960671 +0000 UTC m=+1497.098791529" watchObservedRunningTime="2026-01-20 15:15:01.295596817 +0000 UTC m=+1497.105427675" Jan 20 15:15:02 crc kubenswrapper[4949]: I0120 15:15:02.288220 4949 generic.go:334] "Generic (PLEG): container finished" podID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerID="1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b" exitCode=0 Jan 20 15:15:02 crc kubenswrapper[4949]: I0120 15:15:02.288332 
4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerDied","Data":"1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b"} Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.645835 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.765899 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") pod \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.766111 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") pod \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.766153 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") pod \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\" (UID: \"574a1f73-b7b1-4ff1-9621-3c13ad507d66\") " Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.766827 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume" (OuterVolumeSpecName: "config-volume") pod "574a1f73-b7b1-4ff1-9621-3c13ad507d66" (UID: "574a1f73-b7b1-4ff1-9621-3c13ad507d66"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.772605 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn" (OuterVolumeSpecName: "kube-api-access-sfkwn") pod "574a1f73-b7b1-4ff1-9621-3c13ad507d66" (UID: "574a1f73-b7b1-4ff1-9621-3c13ad507d66"). InnerVolumeSpecName "kube-api-access-sfkwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.773490 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "574a1f73-b7b1-4ff1-9621-3c13ad507d66" (UID: "574a1f73-b7b1-4ff1-9621-3c13ad507d66"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.867782 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/574a1f73-b7b1-4ff1-9621-3c13ad507d66-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.867815 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/574a1f73-b7b1-4ff1-9621-3c13ad507d66-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:03 crc kubenswrapper[4949]: I0120 15:15:03.867825 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfkwn\" (UniqueName: \"kubernetes.io/projected/574a1f73-b7b1-4ff1-9621-3c13ad507d66-kube-api-access-sfkwn\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:04 crc kubenswrapper[4949]: I0120 15:15:04.308928 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" event={"ID":"574a1f73-b7b1-4ff1-9621-3c13ad507d66","Type":"ContainerDied","Data":"e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387"} Jan 20 15:15:04 crc kubenswrapper[4949]: I0120 15:15:04.308970 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e677ac4fc82b4832db058184e80ef21a3ab098a30ba8a78872331e855d2b7387" Jan 20 15:15:04 crc kubenswrapper[4949]: I0120 15:15:04.309034 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.543751 4949 scope.go:117] "RemoveContainer" containerID="044656bf31d86a0a8c627bf29b80249713be535a26f0f8f11b509ab2e81831f7" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.569694 4949 scope.go:117] "RemoveContainer" containerID="d669097a683794b317b66d7fe10e3ab8ca417443354ae6c43068cdeac2abef32" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.586941 4949 scope.go:117] "RemoveContainer" containerID="92f4da25c4af741167e010753c058bf0adddb09092c54b11c31878dc174330e5" Jan 20 15:15:15 crc kubenswrapper[4949]: I0120 15:15:15.626019 4949 scope.go:117] "RemoveContainer" containerID="b9e7253362065575b97f2ce8215072002f755dd1b51aa51ada8298fea676a78f" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.430021 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:26 crc kubenswrapper[4949]: E0120 15:15:26.431157 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerName="collect-profiles" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.431175 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerName="collect-profiles" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.431374 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" containerName="collect-profiles" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.432914 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.452266 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.525200 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.525259 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.525311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627378 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627547 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627582 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.627984 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.651478 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"community-operators-zgqwk\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:26 crc kubenswrapper[4949]: I0120 15:15:26.755271 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.152790 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.153185 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.153233 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.153965 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.154049 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" gracePeriod=600 Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.283918 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:27 crc kubenswrapper[4949]: E0120 15:15:27.289066 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.536374 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" exitCode=0 Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.536564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a"} Jan 20 15:15:27 
crc kubenswrapper[4949]: I0120 15:15:27.536995 4949 scope.go:117] "RemoveContainer" containerID="1d51ab299d273fe84d76c0c0f26419c164cac7661929f3c29031ae0e7812825e" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.537987 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:15:27 crc kubenswrapper[4949]: E0120 15:15:27.538269 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.539887 4949 generic.go:334] "Generic (PLEG): container finished" podID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" exitCode=0 Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.539929 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c"} Jan 20 15:15:27 crc kubenswrapper[4949]: I0120 15:15:27.540050 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerStarted","Data":"ae40b37149370d760ec04644c7fafc2f2f8f38ac0934b5aee3c6cf9cc4197563"} Jan 20 15:15:29 crc kubenswrapper[4949]: I0120 15:15:29.565169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerStarted","Data":"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23"} Jan 20 15:15:30 crc kubenswrapper[4949]: I0120 15:15:30.579404 4949 generic.go:334] "Generic (PLEG): container finished" podID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" exitCode=0 Jan 20 15:15:30 crc kubenswrapper[4949]: I0120 15:15:30.579456 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23"} Jan 20 15:15:33 crc kubenswrapper[4949]: I0120 15:15:33.606336 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerStarted","Data":"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98"} Jan 20 15:15:33 crc kubenswrapper[4949]: I0120 15:15:33.628195 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zgqwk" podStartSLOduration=2.334021055 podStartE2EDuration="7.628165597s" podCreationTimestamp="2026-01-20 15:15:26 +0000 UTC" firstStartedPulling="2026-01-20 15:15:27.542442308 +0000 UTC m=+1523.352273166" lastFinishedPulling="2026-01-20 15:15:32.83658684 +0000 UTC m=+1528.646417708" observedRunningTime="2026-01-20 15:15:33.624681459 +0000 UTC m=+1529.434512317" watchObservedRunningTime="2026-01-20 
15:15:33.628165597 +0000 UTC m=+1529.437996505" Jan 20 15:15:36 crc kubenswrapper[4949]: I0120 15:15:36.755621 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:36 crc kubenswrapper[4949]: I0120 15:15:36.756434 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:36 crc kubenswrapper[4949]: I0120 15:15:36.822810 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:38 crc kubenswrapper[4949]: I0120 15:15:38.789248 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:15:38 crc kubenswrapper[4949]: E0120 15:15:38.789679 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:15:46 crc kubenswrapper[4949]: I0120 15:15:46.811244 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:46 crc kubenswrapper[4949]: I0120 15:15:46.875149 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:47 crc kubenswrapper[4949]: I0120 15:15:47.740723 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zgqwk" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" containerID="cri-o://1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" gracePeriod=2 Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.205427 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.328056 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") pod \"8308fb38-8369-4477-8b02-8ac8f53247ac\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.328440 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") pod \"8308fb38-8369-4477-8b02-8ac8f53247ac\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.328545 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") pod \"8308fb38-8369-4477-8b02-8ac8f53247ac\" (UID: \"8308fb38-8369-4477-8b02-8ac8f53247ac\") " Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.331017 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities" (OuterVolumeSpecName: "utilities") pod "8308fb38-8369-4477-8b02-8ac8f53247ac" (UID: "8308fb38-8369-4477-8b02-8ac8f53247ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.350342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89" (OuterVolumeSpecName: "kube-api-access-q9k89") pod "8308fb38-8369-4477-8b02-8ac8f53247ac" (UID: "8308fb38-8369-4477-8b02-8ac8f53247ac"). InnerVolumeSpecName "kube-api-access-q9k89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.399851 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8308fb38-8369-4477-8b02-8ac8f53247ac" (UID: "8308fb38-8369-4477-8b02-8ac8f53247ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.432125 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9k89\" (UniqueName: \"kubernetes.io/projected/8308fb38-8369-4477-8b02-8ac8f53247ac-kube-api-access-q9k89\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.432179 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.432201 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8308fb38-8369-4477-8b02-8ac8f53247ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752110 4949 generic.go:334] "Generic (PLEG): container finished" podID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" exitCode=0 Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752183 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98"} Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752234 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgqwk" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgqwk" event={"ID":"8308fb38-8369-4477-8b02-8ac8f53247ac","Type":"ContainerDied","Data":"ae40b37149370d760ec04644c7fafc2f2f8f38ac0934b5aee3c6cf9cc4197563"} Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.752268 4949 scope.go:117] "RemoveContainer" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.794126 4949 scope.go:117] "RemoveContainer" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.801184 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.801218 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zgqwk"] Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.814329 4949 scope.go:117] "RemoveContainer" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.862239 4949 scope.go:117] "RemoveContainer" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" Jan 20 15:15:48 crc kubenswrapper[4949]: E0120 15:15:48.862929 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98\": container with ID starting with 1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98 not found: ID does not exist" containerID="1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863117 
4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98"} err="failed to get container status \"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98\": rpc error: code = NotFound desc = could not find container \"1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98\": container with ID starting with 1b323f8e5a06b9dbd6b534b369753533724b291bace9dc4874df337665998e98 not found: ID does not exist" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863239 4949 scope.go:117] "RemoveContainer" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" Jan 20 15:15:48 crc kubenswrapper[4949]: E0120 15:15:48.863820 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23\": container with ID starting with e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23 not found: ID does not exist" containerID="e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863871 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23"} err="failed to get container status \"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23\": rpc error: code = NotFound desc = could not find container \"e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23\": container with ID starting with e7fe821abc76d86b2f62779cbd5c9fb1c2331cb95d87fc42d6fcaff1c4ef7d23 not found: ID does not exist" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.863910 4949 scope.go:117] "RemoveContainer" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" Jan 20 15:15:48 crc kubenswrapper[4949]: E0120 15:15:48.864252 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c\": container with ID starting with b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c not found: ID does not exist" containerID="b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c" Jan 20 15:15:48 crc kubenswrapper[4949]: I0120 15:15:48.864336 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c"} err="failed to get container status \"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c\": rpc error: code = NotFound desc = could not find container \"b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c\": container with ID starting with b3d165aff16df63349450a8bb49b6bff869d98893f3d9fab99d5decf3901ea3c not found: ID does not exist" Jan 20 15:15:49 crc kubenswrapper[4949]: I0120 15:15:49.789544 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:15:49 crc kubenswrapper[4949]: E0120 15:15:49.790211 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:15:50 crc kubenswrapper[4949]: I0120 15:15:50.804264 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" path="/var/lib/kubelet/pods/8308fb38-8369-4477-8b02-8ac8f53247ac/volumes" Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.064349 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.082970 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.104575 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.113937 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6dd3-account-create-update-k72x5"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.123575 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-x9bkl"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.132730 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zr22v"] Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.801238 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cffaea4-923f-446d-9df7-7c35332af89d" path="/var/lib/kubelet/pods/2cffaea4-923f-446d-9df7-7c35332af89d/volumes" Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.802284 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d427b9-3122-480c-8b2a-3862cdd2b3e2" path="/var/lib/kubelet/pods/81d427b9-3122-480c-8b2a-3862cdd2b3e2/volumes" Jan 20 15:16:02 crc kubenswrapper[4949]: I0120 15:16:02.803009 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3acdd4-7817-4358-8afb-90399e3fa23f" path="/var/lib/kubelet/pods/fa3acdd4-7817-4358-8afb-90399e3fa23f/volumes" Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.033343 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.040027 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.046458 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ctk5g"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.053810 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.064320 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-68d2-account-create-update-7xhv6"] Jan 20 15:16:03 crc kubenswrapper[4949]: I0120 15:16:03.071167 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-31fc-account-create-update-cvjjl"] Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.794720 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:04 crc kubenswrapper[4949]: E0120 15:16:04.795953 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.799016 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f223041-d962-43d8-81ad-0480ed09ff57" path="/var/lib/kubelet/pods/5f223041-d962-43d8-81ad-0480ed09ff57/volumes" Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.799580 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="625a0372-8b33-45fa-ad97-ad8e362be0fb" path="/var/lib/kubelet/pods/625a0372-8b33-45fa-ad97-ad8e362be0fb/volumes" Jan 20 15:16:04 crc kubenswrapper[4949]: I0120 15:16:04.800240 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2993cec-87be-40ef-8f45-51ad7072f115" path="/var/lib/kubelet/pods/e2993cec-87be-40ef-8f45-51ad7072f115/volumes" Jan 20 15:16:11 crc kubenswrapper[4949]: I0120 15:16:11.985102 4949 generic.go:334] "Generic (PLEG): container finished" podID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" containerID="e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3" exitCode=0 Jan 20 15:16:11 crc kubenswrapper[4949]: I0120 15:16:11.985222 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerDied","Data":"e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3"} Jan 20 15:16:12 crc kubenswrapper[4949]: I0120 15:16:12.032964 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:16:12 crc kubenswrapper[4949]: I0120 15:16:12.043668 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btxws"] Jan 20 15:16:12 crc kubenswrapper[4949]: I0120 15:16:12.800031 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafb93d7-a006-4cd2-99bd-e21022a5078f" path="/var/lib/kubelet/pods/cafb93d7-a006-4cd2-99bd-e21022a5078f/volumes" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.444599 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.502253 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") pod \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.502371 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") pod \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.503202 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") pod \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\" (UID: \"f8d847d1-1215-4c1c-9741-fb2dcf39e42d\") " Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.508819 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g" (OuterVolumeSpecName: "kube-api-access-hmx4g") pod "f8d847d1-1215-4c1c-9741-fb2dcf39e42d" (UID: "f8d847d1-1215-4c1c-9741-fb2dcf39e42d"). InnerVolumeSpecName "kube-api-access-hmx4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.541628 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8d847d1-1215-4c1c-9741-fb2dcf39e42d" (UID: "f8d847d1-1215-4c1c-9741-fb2dcf39e42d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.557711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory" (OuterVolumeSpecName: "inventory") pod "f8d847d1-1215-4c1c-9741-fb2dcf39e42d" (UID: "f8d847d1-1215-4c1c-9741-fb2dcf39e42d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.605804 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmx4g\" (UniqueName: \"kubernetes.io/projected/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-kube-api-access-hmx4g\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.605853 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:13 crc kubenswrapper[4949]: I0120 15:16:13.605872 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8d847d1-1215-4c1c-9741-fb2dcf39e42d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.011117 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" event={"ID":"f8d847d1-1215-4c1c-9741-fb2dcf39e42d","Type":"ContainerDied","Data":"79e3fd11ead5b5d6100c8c9e2f04259848a510f2e84a78bb74dcea8b8590c187"} Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.011172 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e3fd11ead5b5d6100c8c9e2f04259848a510f2e84a78bb74dcea8b8590c187" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.011244 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133033 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133578 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-content" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133607 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-content" Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133658 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133670 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133689 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133703 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: E0120 15:16:14.133745 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-utilities" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.133755 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="extract-utilities" Jan 20 15:16:14 crc 
kubenswrapper[4949]: I0120 15:16:14.134048 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8308fb38-8369-4477-8b02-8ac8f53247ac" containerName="registry-server" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.134085 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.134966 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.136794 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.137763 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.140724 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.148431 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.149906 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.214416 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.214528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.214788 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.316550 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.316647 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.316750 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.321287 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.322488 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.338840 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-cfft5\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:14 crc kubenswrapper[4949]: I0120 15:16:14.455000 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.024707 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.030863 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.692305 4949 scope.go:117] "RemoveContainer" containerID="29003c0194acb9afdeb9e8174b3f33c4656b98673fb67369661844d652a26c45" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.790774 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:15 crc kubenswrapper[4949]: E0120 15:16:15.791122 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.807746 4949 scope.go:117] "RemoveContainer" containerID="55c6563e40c843e59be4fafc63ead58bf30f2492a5f98973bbb68f0d2d05885c" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.838606 4949 scope.go:117] "RemoveContainer" containerID="094b6628ac46a9618593f47c854c4d7a9d9b69f90d2558abc891d2b0e99aaaf8" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.895809 4949 scope.go:117] "RemoveContainer" containerID="60cec251d342b33f2835307267fafe842a90ee5c67ed1111d71404e9b0f935b9" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.919055 4949 scope.go:117] "RemoveContainer" containerID="dffa1d52cc18bc6e9f06ff6a01edc5037b2d08f37abf9a308e6bc44c3c94e753" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.941193 4949 scope.go:117] "RemoveContainer" containerID="164c3dcadf95a92cfbbf8afe3651c7b6ec563c58436faac06ef963587b8a851b" Jan 20 15:16:15 crc kubenswrapper[4949]: I0120 15:16:15.963412 4949 scope.go:117] "RemoveContainer" containerID="182fc5d23cfc8772155fb0ae18fcbb7d700abd47011cd0c4eae8e341dd49f364" Jan 20 15:16:16 crc kubenswrapper[4949]: I0120 15:16:16.038618 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerStarted","Data":"97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684"} Jan 20 15:16:16 crc kubenswrapper[4949]: I0120 15:16:16.038670 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerStarted","Data":"ca9096f49751c41d723589cdb971593c94a53895a459532c0d6cb2b394d39b3e"} Jan 20 15:16:21 crc kubenswrapper[4949]: I0120 15:16:21.093453 4949 generic.go:334] "Generic (PLEG): container finished" podID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerID="97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684" exitCode=0 Jan 20 15:16:21 crc kubenswrapper[4949]: I0120 15:16:21.093564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" 
event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerDied","Data":"97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684"} Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.521825 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.563319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") pod \"3af1d203-d1de-4e8b-95cb-7977a46b0042\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.563764 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") pod \"3af1d203-d1de-4e8b-95cb-7977a46b0042\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.563844 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") pod \"3af1d203-d1de-4e8b-95cb-7977a46b0042\" (UID: \"3af1d203-d1de-4e8b-95cb-7977a46b0042\") " Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.569530 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt" (OuterVolumeSpecName: "kube-api-access-7s4lt") pod "3af1d203-d1de-4e8b-95cb-7977a46b0042" (UID: "3af1d203-d1de-4e8b-95cb-7977a46b0042"). InnerVolumeSpecName "kube-api-access-7s4lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.598837 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3af1d203-d1de-4e8b-95cb-7977a46b0042" (UID: "3af1d203-d1de-4e8b-95cb-7977a46b0042"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.618084 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory" (OuterVolumeSpecName: "inventory") pod "3af1d203-d1de-4e8b-95cb-7977a46b0042" (UID: "3af1d203-d1de-4e8b-95cb-7977a46b0042"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.665681 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s4lt\" (UniqueName: \"kubernetes.io/projected/3af1d203-d1de-4e8b-95cb-7977a46b0042-kube-api-access-7s4lt\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.665724 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:22 crc kubenswrapper[4949]: I0120 15:16:22.665736 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3af1d203-d1de-4e8b-95cb-7977a46b0042-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.111591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" event={"ID":"3af1d203-d1de-4e8b-95cb-7977a46b0042","Type":"ContainerDied","Data":"ca9096f49751c41d723589cdb971593c94a53895a459532c0d6cb2b394d39b3e"} Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.111646 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca9096f49751c41d723589cdb971593c94a53895a459532c0d6cb2b394d39b3e" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.111670 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.198564 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:16:23 crc kubenswrapper[4949]: E0120 15:16:23.198943 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.198963 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.199151 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.199723 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.201665 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.202394 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.202505 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.202958 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.240920 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.275623 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.275713 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.275761 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.377465 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.377590 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.377651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.381636 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.382317 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.394510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-5h8ll\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:23 crc kubenswrapper[4949]: I0120 15:16:23.520245 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:16:24 crc kubenswrapper[4949]: I0120 15:16:24.070000 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:16:24 crc kubenswrapper[4949]: I0120 15:16:24.121837 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerStarted","Data":"cdcffd9d7b2827b806fff56d34240ae3f953595cc37d090193b0f2cf2b2417fe"} Jan 20 15:16:25 crc kubenswrapper[4949]: I0120 15:16:25.132762 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerStarted","Data":"c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3"} Jan 20 15:16:25 crc kubenswrapper[4949]: I0120 15:16:25.154268 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" podStartSLOduration=1.5372355519999998 podStartE2EDuration="2.154246194s" podCreationTimestamp="2026-01-20 15:16:23 +0000 UTC" firstStartedPulling="2026-01-20 15:16:24.082248032 +0000 UTC m=+1579.892078910" lastFinishedPulling="2026-01-20 15:16:24.699258694 +0000 UTC m=+1580.509089552" observedRunningTime="2026-01-20 15:16:25.148212364 +0000 UTC m=+1580.958043222" watchObservedRunningTime="2026-01-20 15:16:25.154246194 +0000 UTC m=+1580.964077052" Jan 20 15:16:26 crc kubenswrapper[4949]: I0120 15:16:26.789785 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:26 crc kubenswrapper[4949]: E0120 15:16:26.790647 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.051659 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.066840 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.076879 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.086099 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.092979 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.099956 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.106510 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7a79-account-create-update-zrwtk"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.112864 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4b86v"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.119610 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zd7sx"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.127208 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f894-account-create-update-zl66h"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.133902 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9aa3-account-create-update-jbv24"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.140462 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tdr7p"] Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.806183 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2114c9bc-9691-4d96-8541-28ec5473428a" path="/var/lib/kubelet/pods/2114c9bc-9691-4d96-8541-28ec5473428a/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.807503 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbefde7-e737-4f29-9093-afc47f438c4c" path="/var/lib/kubelet/pods/6cbefde7-e737-4f29-9093-afc47f438c4c/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.808771 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900c89f3-a834-4a95-88cf-b6fda3fc9c58" path="/var/lib/kubelet/pods/900c89f3-a834-4a95-88cf-b6fda3fc9c58/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.809853 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57b3d7e-755f-43d2-aab3-f6d68a062a37" path="/var/lib/kubelet/pods/b57b3d7e-755f-43d2-aab3-f6d68a062a37/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.811918 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b36c38-4cb3-43d1-ade8-a1e554264870" 
path="/var/lib/kubelet/pods/c5b36c38-4cb3-43d1-ade8-a1e554264870/volumes" Jan 20 15:16:32 crc kubenswrapper[4949]: I0120 15:16:32.812927 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0efbc8-5060-4336-85af-23b901dd02fe" path="/var/lib/kubelet/pods/de0efbc8-5060-4336-85af-23b901dd02fe/volumes" Jan 20 15:16:35 crc kubenswrapper[4949]: I0120 15:16:35.022991 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:16:35 crc kubenswrapper[4949]: I0120 15:16:35.029397 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-48l6g"] Jan 20 15:16:36 crc kubenswrapper[4949]: I0120 15:16:36.802452 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c607bb7c-569c-4da2-b6bf-5b6c9b5c041e" path="/var/lib/kubelet/pods/c607bb7c-569c-4da2-b6bf-5b6c9b5c041e/volumes" Jan 20 15:16:38 crc kubenswrapper[4949]: I0120 15:16:38.790770 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:38 crc kubenswrapper[4949]: E0120 15:16:38.791558 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:16:44 crc kubenswrapper[4949]: I0120 15:16:44.029843 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:16:44 crc kubenswrapper[4949]: I0120 15:16:44.044700 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kp4rp"] Jan 20 15:16:44 crc kubenswrapper[4949]: I0120 15:16:44.807700 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e8050e-32dc-4014-9bc7-cd06d127eb38" path="/var/lib/kubelet/pods/a8e8050e-32dc-4014-9bc7-cd06d127eb38/volumes" Jan 20 15:16:53 crc kubenswrapper[4949]: I0120 15:16:53.789145 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:16:53 crc kubenswrapper[4949]: E0120 15:16:53.789944 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.349064 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.352369 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.369327 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.506483 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.506580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.506658 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.608477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.608626 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.608693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.609327 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.609481 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.630585 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"certified-operators-vwf8t\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:02 crc kubenswrapper[4949]: I0120 15:17:02.691053 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.225077 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.485796 4949 generic.go:334] "Generic (PLEG): container finished" podID="5fd12601-236e-4205-a994-2202832cf5a2" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90" exitCode=0 Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.486215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90"} Jan 20 15:17:03 crc kubenswrapper[4949]: I0120 15:17:03.486266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerStarted","Data":"c5c43880117653d176e1498628fccde90a84869ad2ea1f0556fe05553c3d89b2"} Jan 20 15:17:04 crc kubenswrapper[4949]: I0120 15:17:04.799983 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:04 crc kubenswrapper[4949]: E0120 15:17:04.801126 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:05 crc kubenswrapper[4949]: I0120 15:17:05.508509 4949 generic.go:334] "Generic (PLEG): container finished" podID="5fd12601-236e-4205-a994-2202832cf5a2" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885" exitCode=0 Jan 20 15:17:05 crc kubenswrapper[4949]: I0120 15:17:05.508576 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885"} Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.524155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerStarted","Data":"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"} Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.526602 4949 generic.go:334] "Generic (PLEG): container finished" podID="9b62cf27-c244-466f-bddd-129a1a3db687" containerID="c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3" exitCode=0 Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.526662 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerDied","Data":"c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3"} Jan 20 15:17:06 crc kubenswrapper[4949]: I0120 15:17:06.547370 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwf8t" podStartSLOduration=1.747856418 podStartE2EDuration="4.547351463s" podCreationTimestamp="2026-01-20 15:17:02 +0000 UTC" firstStartedPulling="2026-01-20 15:17:03.487152302 +0000 UTC m=+1619.296983160" lastFinishedPulling="2026-01-20 15:17:06.286647337 +0000 UTC m=+1622.096478205" observedRunningTime="2026-01-20 15:17:06.544640226 +0000 UTC m=+1622.354471084" watchObservedRunningTime="2026-01-20 15:17:06.547351463 +0000 UTC m=+1622.357182321" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.412264 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.528985 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") pod \"9b62cf27-c244-466f-bddd-129a1a3db687\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.529066 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") pod \"9b62cf27-c244-466f-bddd-129a1a3db687\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.529145 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") pod \"9b62cf27-c244-466f-bddd-129a1a3db687\" (UID: \"9b62cf27-c244-466f-bddd-129a1a3db687\") " Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.535142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5" (OuterVolumeSpecName: "kube-api-access-f9mj5") pod "9b62cf27-c244-466f-bddd-129a1a3db687" (UID: "9b62cf27-c244-466f-bddd-129a1a3db687"). InnerVolumeSpecName "kube-api-access-f9mj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.543669 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" event={"ID":"9b62cf27-c244-466f-bddd-129a1a3db687","Type":"ContainerDied","Data":"cdcffd9d7b2827b806fff56d34240ae3f953595cc37d090193b0f2cf2b2417fe"} Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.543708 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcffd9d7b2827b806fff56d34240ae3f953595cc37d090193b0f2cf2b2417fe" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.543758 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.562186 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory" (OuterVolumeSpecName: "inventory") pod "9b62cf27-c244-466f-bddd-129a1a3db687" (UID: "9b62cf27-c244-466f-bddd-129a1a3db687"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.564480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b62cf27-c244-466f-bddd-129a1a3db687" (UID: "9b62cf27-c244-466f-bddd-129a1a3db687"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.634739 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.634788 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b62cf27-c244-466f-bddd-129a1a3db687-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.634802 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9mj5\" (UniqueName: \"kubernetes.io/projected/9b62cf27-c244-466f-bddd-129a1a3db687-kube-api-access-f9mj5\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.638337 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:17:08 crc kubenswrapper[4949]: E0120 15:17:08.638820 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.638840 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.639074 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.640712 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.650279 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.736265 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.736435 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.736671 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.838554 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.838661 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.838804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.842369 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 
15:17:08.843929 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.856721 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:08 crc kubenswrapper[4949]: I0120 15:17:08.968714 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:09 crc kubenswrapper[4949]: I0120 15:17:09.344778 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:17:09 crc kubenswrapper[4949]: W0120 15:17:09.352833 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod949e48ac_89ca_4f38_886e_fd951c7d7217.slice/crio-873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9 WatchSource:0}: Error finding container 873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9: Status 404 returned error can't find the container with id 873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9 Jan 20 15:17:09 crc kubenswrapper[4949]: I0120 15:17:09.551207 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerStarted","Data":"873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9"} Jan 20 15:17:10 crc kubenswrapper[4949]: I0120 15:17:10.565667 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerStarted","Data":"1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6"} Jan 20 15:17:10 crc kubenswrapper[4949]: I0120 15:17:10.593418 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" podStartSLOduration=2.063735413 podStartE2EDuration="2.593399932s" podCreationTimestamp="2026-01-20 15:17:08 +0000 UTC" firstStartedPulling="2026-01-20 15:17:09.355954354 +0000 UTC m=+1625.165785212" lastFinishedPulling="2026-01-20 15:17:09.885618843 +0000 UTC m=+1625.695449731" observedRunningTime="2026-01-20 15:17:10.586268065 +0000 UTC m=+1626.396098923" watchObservedRunningTime="2026-01-20 15:17:10.593399932 +0000 UTC m=+1626.403230790" Jan 20 15:17:12 crc kubenswrapper[4949]: I0120 15:17:12.692190 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:12 crc kubenswrapper[4949]: I0120 15:17:12.692820 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:12 crc kubenswrapper[4949]: I0120 15:17:12.737656 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:13 crc kubenswrapper[4949]: I0120 15:17:13.699174 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:13 crc kubenswrapper[4949]: I0120 15:17:13.771905 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:14 crc kubenswrapper[4949]: I0120 15:17:14.617898 4949 generic.go:334] "Generic (PLEG): container finished" podID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerID="1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6" exitCode=0 Jan 20 15:17:14 crc kubenswrapper[4949]: I0120 15:17:14.617981 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerDied","Data":"1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6"} Jan 20 15:17:15 crc kubenswrapper[4949]: I0120 15:17:15.045326 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lbd6l"] Jan 20 15:17:15 crc kubenswrapper[4949]: I0120 15:17:15.055501 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lbd6l"] Jan 20 15:17:15 crc kubenswrapper[4949]: I0120 15:17:15.624493 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwf8t" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server" containerID="cri-o://acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4" gracePeriod=2 Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.024125 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.113502 4949 scope.go:117] "RemoveContainer" containerID="db80a4f0bdc48f37dc22bc58775d3f05dd7c013f54339b1f5661562fd9df7daa" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.147824 4949 scope.go:117] "RemoveContainer" containerID="a781bfdfd8762ae5e24e9222dfc90fa11c886930c4dbb418962538438aae1ac6" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.176575 4949 scope.go:117] "RemoveContainer" containerID="1b29787e73d44fce82b44b4dc092f944512be0b9918fd3a1f7b95398ec00eb0f" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.182040 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") pod \"949e48ac-89ca-4f38-886e-fd951c7d7217\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.182164 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") pod \"949e48ac-89ca-4f38-886e-fd951c7d7217\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.182266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") pod \"949e48ac-89ca-4f38-886e-fd951c7d7217\" (UID: \"949e48ac-89ca-4f38-886e-fd951c7d7217\") " Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.187949 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv" (OuterVolumeSpecName: "kube-api-access-fnpdv") pod "949e48ac-89ca-4f38-886e-fd951c7d7217" (UID: "949e48ac-89ca-4f38-886e-fd951c7d7217"). InnerVolumeSpecName "kube-api-access-fnpdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.211542 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "949e48ac-89ca-4f38-886e-fd951c7d7217" (UID: "949e48ac-89ca-4f38-886e-fd951c7d7217"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.212150 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory" (OuterVolumeSpecName: "inventory") pod "949e48ac-89ca-4f38-886e-fd951c7d7217" (UID: "949e48ac-89ca-4f38-886e-fd951c7d7217"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.248125 4949 scope.go:117] "RemoveContainer" containerID="3317c3d6f4446853e9f40dfeb54dd548432b68af8205e468da2990c7a1c463c4" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.285572 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnpdv\" (UniqueName: \"kubernetes.io/projected/949e48ac-89ca-4f38-886e-fd951c7d7217-kube-api-access-fnpdv\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.285614 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.285627 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/949e48ac-89ca-4f38-886e-fd951c7d7217-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.292655 4949 scope.go:117] "RemoveContainer" containerID="c597795c21e284cf8447b4c1ba489d0c9f85fbd9dd3ef4fe3d4ba5bb6bd98cfb" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.313030 4949 scope.go:117] "RemoveContainer" containerID="e4c82d229c717e5c0ffde6b9f00c036b0384157d1d756dd1f0e6b2ffaf868b06" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.339062 4949 scope.go:117] "RemoveContainer" containerID="16aca3788ba46fca2c3a4e2db01394682bdf190975c465ad5615866366e0a008" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.369316 4949 scope.go:117] "RemoveContainer" containerID="d629aa6c999c4680b1c85169158551de91f7a34a4f27afe1607eb228257fc70c" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.486177 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591014 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") pod \"5fd12601-236e-4205-a994-2202832cf5a2\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591275 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") pod \"5fd12601-236e-4205-a994-2202832cf5a2\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591329 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") pod \"5fd12601-236e-4205-a994-2202832cf5a2\" (UID: \"5fd12601-236e-4205-a994-2202832cf5a2\") " Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.591905 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities" (OuterVolumeSpecName: "utilities") pod "5fd12601-236e-4205-a994-2202832cf5a2" (UID: "5fd12601-236e-4205-a994-2202832cf5a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.596568 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5" (OuterVolumeSpecName: "kube-api-access-l7tq5") pod "5fd12601-236e-4205-a994-2202832cf5a2" (UID: "5fd12601-236e-4205-a994-2202832cf5a2"). InnerVolumeSpecName "kube-api-access-l7tq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635320 4949 generic.go:334] "Generic (PLEG): container finished" podID="5fd12601-236e-4205-a994-2202832cf5a2" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4" exitCode=0 Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635405 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwf8t" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635375 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"} Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwf8t" event={"ID":"5fd12601-236e-4205-a994-2202832cf5a2","Type":"ContainerDied","Data":"c5c43880117653d176e1498628fccde90a84869ad2ea1f0556fe05553c3d89b2"} Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.635606 4949 scope.go:117] "RemoveContainer" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.637722 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" event={"ID":"949e48ac-89ca-4f38-886e-fd951c7d7217","Type":"ContainerDied","Data":"873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9"} Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.637939 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873ac76801c3a3f71750b6d39db12dda9144be1c865cb5597abe08d643be23c9" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.637756 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.644191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fd12601-236e-4205-a994-2202832cf5a2" (UID: "5fd12601-236e-4205-a994-2202832cf5a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.660423 4949 scope.go:117] "RemoveContainer" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.694031 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7tq5\" (UniqueName: \"kubernetes.io/projected/5fd12601-236e-4205-a994-2202832cf5a2-kube-api-access-l7tq5\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.694075 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.694088 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd12601-236e-4205-a994-2202832cf5a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.701829 4949 scope.go:117] "RemoveContainer" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.709920 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"] Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710269 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-utilities" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710285 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-utilities" Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710307 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710314 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server" Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710330 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710337 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.710350 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-content" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710356 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="extract-content" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710582 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd12601-236e-4205-a994-2202832cf5a2" containerName="registry-server" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.710605 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 
15:17:16.711151 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.716592 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.716604 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.716834 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.717407 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.719130 4949 scope.go:117] "RemoveContainer" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4" Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.719870 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4\": container with ID starting with acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4 not found: ID does not exist" containerID="acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.719915 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4"} err="failed to get container status \"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4\": rpc error: code = NotFound desc = could not find container \"acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4\": container with ID starting with acb66f70ac580ceafb37eee06ca4a80def3afd672673cd0eaf6e56732f94fef4 not found: ID does not exist" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.719944 4949 scope.go:117] "RemoveContainer" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885" Jan 20 15:17:16 crc kubenswrapper[4949]: E0120 15:17:16.720192 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885\": container with ID starting with 85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885 not found: ID does not exist" containerID="85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.720220 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885"} err="failed to get container status \"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885\": rpc error: code = NotFound desc = could not find container \"85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885\": container with ID starting with 85bebe5e5bbc3e870b4baf3afda661a61a91fadcc5c1029357dba3e1e8ad2885 not found: ID does not exist" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.720242 4949 scope.go:117] "RemoveContainer" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90" Jan 20 15:17:16 crc 
kubenswrapper[4949]: E0120 15:17:16.720434 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90\": container with ID starting with 69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90 not found: ID does not exist" containerID="69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.720459 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90"} err="failed to get container status \"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90\": rpc error: code = NotFound desc = could not find container \"69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90\": container with ID starting with 69a8f50be8cc200dd1ff2d3245ed794383cf0ed1f832fdc88bdb86bc828c3a90 not found: ID does not exist" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.722582 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"] Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.796306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.796822 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.796899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.808954 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40994e0d-d911-4b6a-9ae9-96fbc4be8a36" path="/var/lib/kubelet/pods/40994e0d-d911-4b6a-9ae9-96fbc4be8a36/volumes" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.899587 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.899994 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.900155 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.904934 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.905185 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:16 crc kubenswrapper[4949]: I0120 15:17:16.924232 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.001759 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.008175 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwf8t"] Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.028920 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.537784 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"] Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.646258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerStarted","Data":"dc68b704b8f61a3410ea0c36184e65f1a973cb59056bd6ee16a253749318809d"} Jan 20 15:17:17 crc kubenswrapper[4949]: I0120 15:17:17.789465 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:17 crc kubenswrapper[4949]: E0120 15:17:17.789794 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:18 crc kubenswrapper[4949]: I0120 15:17:18.802888 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd12601-236e-4205-a994-2202832cf5a2" path="/var/lib/kubelet/pods/5fd12601-236e-4205-a994-2202832cf5a2/volumes" Jan 20 15:17:19 crc kubenswrapper[4949]: I0120 15:17:19.671410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerStarted","Data":"61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c"} Jan 20 15:17:19 crc kubenswrapper[4949]: I0120 15:17:19.699729 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" podStartSLOduration=2.117989741 podStartE2EDuration="3.699706492s" podCreationTimestamp="2026-01-20 15:17:16 +0000 UTC" firstStartedPulling="2026-01-20 15:17:17.536823992 +0000 UTC m=+1633.346654850" lastFinishedPulling="2026-01-20 15:17:19.118540713 +0000 UTC m=+1634.928371601" observedRunningTime="2026-01-20 15:17:19.692111503 +0000 UTC m=+1635.501942371" watchObservedRunningTime="2026-01-20 15:17:19.699706492 +0000 UTC m=+1635.509537380" Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.887059 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.892013 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.910350 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.929053 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.929116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:20 crc kubenswrapper[4949]: I0120 15:17:20.929146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.030775 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.030959 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.031007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.031452 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.031586 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.053231 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"redhat-operators-sc76c\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.208737 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:21 crc kubenswrapper[4949]: W0120 15:17:21.688178 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f10dbf2_f865_4fd3_b475_e34fd1a18aa4.slice/crio-a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707 WatchSource:0}: Error finding container a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707: Status 404 returned error can't find the container with id a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707 Jan 20 15:17:21 crc kubenswrapper[4949]: I0120 15:17:21.694042 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:22 crc kubenswrapper[4949]: I0120 15:17:22.697409 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" exitCode=0 Jan 20 15:17:22 crc kubenswrapper[4949]: I0120 15:17:22.697464 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5"} Jan 20 15:17:22 crc kubenswrapper[4949]: I0120 15:17:22.697948 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerStarted","Data":"a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707"} Jan 20 15:17:23 crc kubenswrapper[4949]: I0120 15:17:23.707710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerStarted","Data":"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c"} Jan 20 15:17:25 crc kubenswrapper[4949]: I0120 15:17:25.724448 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" exitCode=0 Jan 20 15:17:25 crc kubenswrapper[4949]: I0120 15:17:25.724493 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c"} Jan 20 15:17:26 crc kubenswrapper[4949]: I0120 15:17:26.029917 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j9pm7"] Jan 20 15:17:26 crc kubenswrapper[4949]: I0120 15:17:26.040472 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j9pm7"] Jan 20 15:17:26 crc kubenswrapper[4949]: I0120 15:17:26.800365 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f96f008-7e3c-4512-bddd-51e42a0c7ce2" path="/var/lib/kubelet/pods/1f96f008-7e3c-4512-bddd-51e42a0c7ce2/volumes" Jan 20 15:17:27 crc 
kubenswrapper[4949]: I0120 15:17:27.031482 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"] Jan 20 15:17:27 crc kubenswrapper[4949]: I0120 15:17:27.043989 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vx8lk"] Jan 20 15:17:28 crc kubenswrapper[4949]: I0120 15:17:28.759222 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerStarted","Data":"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6"} Jan 20 15:17:28 crc kubenswrapper[4949]: I0120 15:17:28.800921 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b5f79a-1adc-4ec3-a257-ce37600d2357" path="/var/lib/kubelet/pods/26b5f79a-1adc-4ec3-a257-ce37600d2357/volumes" Jan 20 15:17:29 crc kubenswrapper[4949]: I0120 15:17:29.796924 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sc76c" podStartSLOduration=4.359775904 podStartE2EDuration="9.796903828s" podCreationTimestamp="2026-01-20 15:17:20 +0000 UTC" firstStartedPulling="2026-01-20 15:17:22.700419023 +0000 UTC m=+1638.510249891" lastFinishedPulling="2026-01-20 15:17:28.137546957 +0000 UTC m=+1643.947377815" observedRunningTime="2026-01-20 15:17:29.790757095 +0000 UTC m=+1645.600587973" watchObservedRunningTime="2026-01-20 15:17:29.796903828 +0000 UTC m=+1645.606734706" Jan 20 15:17:30 crc kubenswrapper[4949]: I0120 15:17:30.790969 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:30 crc kubenswrapper[4949]: E0120 15:17:30.791500 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:31 crc kubenswrapper[4949]: I0120 15:17:31.209194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:31 crc kubenswrapper[4949]: I0120 15:17:31.209868 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:32 crc kubenswrapper[4949]: I0120 15:17:32.274626 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sc76c" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" probeResult="failure" output=< Jan 20 15:17:32 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 15:17:32 crc kubenswrapper[4949]: > Jan 20 15:17:33 crc kubenswrapper[4949]: I0120 15:17:33.046410 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lm4wz"] Jan 20 15:17:33 crc kubenswrapper[4949]: I0120 15:17:33.056238 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lm4wz"] Jan 20 15:17:34 crc kubenswrapper[4949]: I0120 15:17:34.801291 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f476712d-366a-4948-b282-66660a6d81c4" path="/var/lib/kubelet/pods/f476712d-366a-4948-b282-66660a6d81c4/volumes" Jan 20 15:17:41 crc 
kubenswrapper[4949]: I0120 15:17:41.270339 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:41 crc kubenswrapper[4949]: I0120 15:17:41.324293 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:41 crc kubenswrapper[4949]: I0120 15:17:41.516690 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:42 crc kubenswrapper[4949]: I0120 15:17:42.788753 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:42 crc kubenswrapper[4949]: E0120 15:17:42.789354 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:17:42 crc kubenswrapper[4949]: I0120 15:17:42.886532 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sc76c" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" containerID="cri-o://2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" gracePeriod=2 Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.304564 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.340996 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") pod \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.341196 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") pod \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.341244 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") pod \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\" (UID: \"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4\") " Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.344297 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities" (OuterVolumeSpecName: "utilities") pod "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" (UID: "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.348485 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j" (OuterVolumeSpecName: "kube-api-access-pww4j") pod "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" (UID: "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4"). InnerVolumeSpecName "kube-api-access-pww4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.443147 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pww4j\" (UniqueName: \"kubernetes.io/projected/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-kube-api-access-pww4j\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.443188 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.478880 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" (UID: "3f10dbf2-f865-4fd3-b475-e34fd1a18aa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.545032 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897790 4949 generic.go:334] "Generic (PLEG): container finished" podID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" exitCode=0 Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897850 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6"} Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc76c" event={"ID":"3f10dbf2-f865-4fd3-b475-e34fd1a18aa4","Type":"ContainerDied","Data":"a597b86fa4fe4d8d378cd01f98b0b8daf8c357e9b427b56e703b7aa9ec697707"} Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.897904 4949 scope.go:117] "RemoveContainer" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.898096 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sc76c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.926098 4949 scope.go:117] "RemoveContainer" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.940934 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.948859 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sc76c"] Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.955437 4949 scope.go:117] "RemoveContainer" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.992621 4949 scope.go:117] "RemoveContainer" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" Jan 20 15:17:43 crc kubenswrapper[4949]: E0120 15:17:43.993240 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6\": container with ID starting with 2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6 not found: ID does not exist" containerID="2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.993387 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6"} err="failed to get container status \"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6\": rpc error: code = NotFound desc = could not find container \"2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6\": container with ID starting with 2608b248ff789dd37f5bb10d4d9ea060daf520e62e53c57d15365f83725fe5f6 not found: ID does not exist" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.993506 4949 scope.go:117] "RemoveContainer" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" Jan 20 15:17:43 crc kubenswrapper[4949]: E0120 15:17:43.994066 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c\": container with ID starting with 08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c not found: ID does not exist" containerID="08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.994100 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c"} err="failed to get container status \"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c\": rpc error: code = NotFound desc = could not find container \"08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c\": container with ID starting with 08eec20461976bb3d3bb5cf86e71a0a7363c750a84e1c6db6b394d7014ff995c not found: ID does not exist" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.994141 4949 scope.go:117] "RemoveContainer" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" Jan 20 15:17:43 crc kubenswrapper[4949]: E0120 15:17:43.994396 4949 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5\": container with ID starting with a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5 not found: ID does not exist" containerID="a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5" Jan 20 15:17:43 crc kubenswrapper[4949]: I0120 15:17:43.994428 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5"} err="failed to get container status \"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5\": rpc error: code = NotFound desc = could not find container \"a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5\": container with ID starting with a2e63cbdcda5afd418c0fd9f665ef73eae6cbce4469222349d824c776dc322d5 not found: ID does not exist" Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.037608 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.050307 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2fwjt"] Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.804406 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" path="/var/lib/kubelet/pods/3f10dbf2-f865-4fd3-b475-e34fd1a18aa4/volumes" Jan 20 15:17:44 crc kubenswrapper[4949]: I0120 15:17:44.806033 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18369cb-0b5b-40f7-bc73-af04fb510f31" path="/var/lib/kubelet/pods/c18369cb-0b5b-40f7-bc73-af04fb510f31/volumes" Jan 20 15:17:53 crc kubenswrapper[4949]: I0120 15:17:53.789618 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:17:53 crc kubenswrapper[4949]: E0120 15:17:53.791116 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:07 crc kubenswrapper[4949]: I0120 15:18:07.789817 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:07 crc kubenswrapper[4949]: E0120 15:18:07.791385 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:15 crc kubenswrapper[4949]: I0120 15:18:15.191370 4949 generic.go:334] "Generic (PLEG): container finished" podID="744449f9-40c5-4c12-944e-f9ff875daf40" containerID="61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c" exitCode=0 Jan 20 15:18:15 crc kubenswrapper[4949]: I0120 15:18:15.191448 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" 
event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerDied","Data":"61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c"} Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.557256 4949 scope.go:117] "RemoveContainer" containerID="f8b3cfedae50e77bf3dc2206f556c9d7bad02daab241ca1f62eeff8bbb5e7df7" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.642291 4949 scope.go:117] "RemoveContainer" containerID="8cb523447a664ee7d1c2eb08354a595f8dd6a512b238d12f561592cd541bb7a7" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.662538 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.687271 4949 scope.go:117] "RemoveContainer" containerID="32ab9bcaba594aad212f54775fb1f42c09b044512f762d52c287bc1ce60443b2" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.733486 4949 scope.go:117] "RemoveContainer" containerID="aed7fe52bc151294271b4f9cd142d75f94b93f932573c90067784cdc82a30aad" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.772068 4949 scope.go:117] "RemoveContainer" containerID="57c84a2f332d6d3e1141d495167c3115a7ad4da207ef63deed652fdc8cda50e5" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.799145 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") pod \"744449f9-40c5-4c12-944e-f9ff875daf40\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.799273 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") pod \"744449f9-40c5-4c12-944e-f9ff875daf40\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.799320 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") pod \"744449f9-40c5-4c12-944e-f9ff875daf40\" (UID: \"744449f9-40c5-4c12-944e-f9ff875daf40\") " Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.806830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj" (OuterVolumeSpecName: "kube-api-access-dqqpj") pod "744449f9-40c5-4c12-944e-f9ff875daf40" (UID: "744449f9-40c5-4c12-944e-f9ff875daf40"). InnerVolumeSpecName "kube-api-access-dqqpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.825321 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory" (OuterVolumeSpecName: "inventory") pod "744449f9-40c5-4c12-944e-f9ff875daf40" (UID: "744449f9-40c5-4c12-944e-f9ff875daf40"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.826490 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "744449f9-40c5-4c12-944e-f9ff875daf40" (UID: "744449f9-40c5-4c12-944e-f9ff875daf40"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.901348 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.901374 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqqpj\" (UniqueName: \"kubernetes.io/projected/744449f9-40c5-4c12-944e-f9ff875daf40-kube-api-access-dqqpj\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:16 crc kubenswrapper[4949]: I0120 15:18:16.901386 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/744449f9-40c5-4c12-944e-f9ff875daf40-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.218476 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" event={"ID":"744449f9-40c5-4c12-944e-f9ff875daf40","Type":"ContainerDied","Data":"dc68b704b8f61a3410ea0c36184e65f1a973cb59056bd6ee16a253749318809d"} Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.218908 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc68b704b8f61a3410ea0c36184e65f1a973cb59056bd6ee16a253749318809d" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.218861 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.311755 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312115 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312132 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312148 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-utilities" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312154 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-utilities" Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312178 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-content" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312184 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="extract-content" Jan 20 15:18:17 crc kubenswrapper[4949]: E0120 15:18:17.312194 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312199 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312349 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f10dbf2-f865-4fd3-b475-e34fd1a18aa4" containerName="registry-server" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.312365 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.313053 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.321456 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.321755 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.321949 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.322462 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.337131 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.408650 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.408783 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.408831 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.510235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.510367 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.510421 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc 
kubenswrapper[4949]: I0120 15:18:17.514138 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.514850 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.529675 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ssh-known-hosts-edpm-deployment-97647\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:17 crc kubenswrapper[4949]: I0120 15:18:17.630315 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:18 crc kubenswrapper[4949]: I0120 15:18:18.200252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:18:18 crc kubenswrapper[4949]: I0120 15:18:18.226449 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerStarted","Data":"5b9fae0c5c441902225f4e9ac8b64a3899150e077640dbb8ba4dda6e2793c7ed"} Jan 20 15:18:18 crc kubenswrapper[4949]: I0120 15:18:18.789472 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:18 crc kubenswrapper[4949]: E0120 15:18:18.790436 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:19 crc kubenswrapper[4949]: I0120 15:18:19.249856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerStarted","Data":"34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1"} Jan 20 15:18:19 crc kubenswrapper[4949]: I0120 15:18:19.268005 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-97647" podStartSLOduration=1.764240357 podStartE2EDuration="2.267986595s" podCreationTimestamp="2026-01-20 15:18:17 +0000 UTC" firstStartedPulling="2026-01-20 15:18:18.205463806 +0000 UTC m=+1694.015294664" lastFinishedPulling="2026-01-20 15:18:18.709210044 +0000 UTC m=+1694.519040902" observedRunningTime="2026-01-20 15:18:19.262635948 +0000 UTC m=+1695.072466806" watchObservedRunningTime="2026-01-20 15:18:19.267986595 +0000 UTC m=+1695.077817453" Jan 20 15:18:22 crc 
kubenswrapper[4949]: I0120 15:18:22.069280 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.078819 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.087078 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xh75b"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.095216 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.107403 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.114017 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.122055 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.128226 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8sgnq"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.134466 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2b86-account-create-update-htsxk"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.142379 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8ce0-account-create-update-zqqvh"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.150545 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-p4ss7"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.158809 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9780-account-create-update-7t5m4"] Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.800934 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170f8463-ece8-42b9-944f-b4adcc22e897" path="/var/lib/kubelet/pods/170f8463-ece8-42b9-944f-b4adcc22e897/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.801601 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3187f0f3-7689-4faf-92cc-8d869ef8ecd9" path="/var/lib/kubelet/pods/3187f0f3-7689-4faf-92cc-8d869ef8ecd9/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.802171 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6572b1b9-85e4-4ede-879f-754c173433d1" path="/var/lib/kubelet/pods/6572b1b9-85e4-4ede-879f-754c173433d1/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.802798 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c4f23f-5c92-4f03-a457-6fe5ddc27eec" path="/var/lib/kubelet/pods/91c4f23f-5c92-4f03-a457-6fe5ddc27eec/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.803973 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956eb935-630a-49f6-8b3e-e5053edea66b" path="/var/lib/kubelet/pods/956eb935-630a-49f6-8b3e-e5053edea66b/volumes" Jan 20 15:18:22 crc kubenswrapper[4949]: I0120 15:18:22.804583 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9468e8c-1c76-4f4f-a3da-1cbc82ea418c" path="/var/lib/kubelet/pods/c9468e8c-1c76-4f4f-a3da-1cbc82ea418c/volumes" Jan 20 15:18:27 crc 
kubenswrapper[4949]: I0120 15:18:27.328789 4949 generic.go:334] "Generic (PLEG): container finished" podID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerID="34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1" exitCode=0 Jan 20 15:18:27 crc kubenswrapper[4949]: I0120 15:18:27.329050 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerDied","Data":"34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1"} Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.822882 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.971322 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") pod \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.971384 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") pod \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.971534 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") pod \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\" (UID: \"ba3f2ff6-def1-41aa-8918-32399eb1a55b\") " Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.977427 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k" (OuterVolumeSpecName: "kube-api-access-vr62k") pod "ba3f2ff6-def1-41aa-8918-32399eb1a55b" (UID: "ba3f2ff6-def1-41aa-8918-32399eb1a55b"). InnerVolumeSpecName "kube-api-access-vr62k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:18:28 crc kubenswrapper[4949]: I0120 15:18:28.995690 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ba3f2ff6-def1-41aa-8918-32399eb1a55b" (UID: "ba3f2ff6-def1-41aa-8918-32399eb1a55b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.002684 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba3f2ff6-def1-41aa-8918-32399eb1a55b" (UID: "ba3f2ff6-def1-41aa-8918-32399eb1a55b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.073543 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr62k\" (UniqueName: \"kubernetes.io/projected/ba3f2ff6-def1-41aa-8918-32399eb1a55b-kube-api-access-vr62k\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.073585 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.073598 4949 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ba3f2ff6-def1-41aa-8918-32399eb1a55b-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.345790 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-97647" event={"ID":"ba3f2ff6-def1-41aa-8918-32399eb1a55b","Type":"ContainerDied","Data":"5b9fae0c5c441902225f4e9ac8b64a3899150e077640dbb8ba4dda6e2793c7ed"} Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.345820 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-97647" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.345840 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9fae0c5c441902225f4e9ac8b64a3899150e077640dbb8ba4dda6e2793c7ed" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.416902 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:18:29 crc kubenswrapper[4949]: E0120 15:18:29.417342 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.417364 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.417595 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.418359 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.421275 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.421397 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.421539 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.422539 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.437228 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.481453 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.481586 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.481803 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.582878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.582956 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.583030 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.594787 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.596453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.602363 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwmt2\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:29 crc kubenswrapper[4949]: I0120 15:18:29.733001 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:30 crc kubenswrapper[4949]: I0120 15:18:30.243234 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:18:30 crc kubenswrapper[4949]: I0120 15:18:30.354302 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerStarted","Data":"f2367ef4f1c65962ade48b16cde42460e0cc46b4c9249a9e08f94884bddb73fe"} Jan 20 15:18:33 crc kubenswrapper[4949]: I0120 15:18:33.407673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerStarted","Data":"56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82"} Jan 20 15:18:33 crc kubenswrapper[4949]: I0120 15:18:33.434609 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" podStartSLOduration=2.269525947 podStartE2EDuration="4.434587859s" podCreationTimestamp="2026-01-20 15:18:29 +0000 UTC" firstStartedPulling="2026-01-20 15:18:30.253357622 +0000 UTC m=+1706.063188480" lastFinishedPulling="2026-01-20 15:18:32.418419534 +0000 UTC m=+1708.228250392" observedRunningTime="2026-01-20 15:18:33.429297094 +0000 UTC m=+1709.239127952" watchObservedRunningTime="2026-01-20 15:18:33.434587859 +0000 UTC m=+1709.244418717" Jan 20 15:18:33 crc kubenswrapper[4949]: I0120 15:18:33.789475 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:33 crc kubenswrapper[4949]: E0120 15:18:33.789744 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:41 crc kubenswrapper[4949]: I0120 15:18:41.469485 4949 generic.go:334] "Generic (PLEG): container finished" podID="ab73db4b-4663-4234-be64-866efa186f5a" containerID="56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82" exitCode=0 Jan 20 15:18:41 crc kubenswrapper[4949]: I0120 15:18:41.469579 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerDied","Data":"56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82"} Jan 20 15:18:42 crc kubenswrapper[4949]: I0120 15:18:42.918579 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.031172 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") pod \"ab73db4b-4663-4234-be64-866efa186f5a\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.031234 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") pod \"ab73db4b-4663-4234-be64-866efa186f5a\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.031267 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") pod \"ab73db4b-4663-4234-be64-866efa186f5a\" (UID: \"ab73db4b-4663-4234-be64-866efa186f5a\") " Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.036792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6" (OuterVolumeSpecName: "kube-api-access-jdkm6") pod "ab73db4b-4663-4234-be64-866efa186f5a" (UID: "ab73db4b-4663-4234-be64-866efa186f5a"). InnerVolumeSpecName "kube-api-access-jdkm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.056164 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab73db4b-4663-4234-be64-866efa186f5a" (UID: "ab73db4b-4663-4234-be64-866efa186f5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.056706 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory" (OuterVolumeSpecName: "inventory") pod "ab73db4b-4663-4234-be64-866efa186f5a" (UID: "ab73db4b-4663-4234-be64-866efa186f5a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.133016 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.133061 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab73db4b-4663-4234-be64-866efa186f5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.133078 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdkm6\" (UniqueName: \"kubernetes.io/projected/ab73db4b-4663-4234-be64-866efa186f5a-kube-api-access-jdkm6\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.497784 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" event={"ID":"ab73db4b-4663-4234-be64-866efa186f5a","Type":"ContainerDied","Data":"f2367ef4f1c65962ade48b16cde42460e0cc46b4c9249a9e08f94884bddb73fe"} Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.498019 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2367ef4f1c65962ade48b16cde42460e0cc46b4c9249a9e08f94884bddb73fe" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.497939 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.609128 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:18:43 crc kubenswrapper[4949]: E0120 15:18:43.609882 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab73db4b-4663-4234-be64-866efa186f5a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.609918 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab73db4b-4663-4234-be64-866efa186f5a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.610342 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab73db4b-4663-4234-be64-866efa186f5a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.611709 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.615868 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.615996 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.617212 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.622617 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.623114 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.742857 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.742999 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.743109 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.845272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.845477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.845577 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.850281 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.851823 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.863473 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:43 crc kubenswrapper[4949]: I0120 15:18:43.940934 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:44 crc kubenswrapper[4949]: I0120 15:18:44.534476 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:18:44 crc kubenswrapper[4949]: W0120 15:18:44.554337 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd10292b4_ea8f_4236_8d89_3b97f21a04cb.slice/crio-dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d WatchSource:0}: Error finding container dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d: Status 404 returned error can't find the container with id dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d Jan 20 15:18:44 crc kubenswrapper[4949]: I0120 15:18:44.794894 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:44 crc kubenswrapper[4949]: E0120 15:18:44.795500 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:18:45 crc kubenswrapper[4949]: I0120 15:18:45.519028 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerStarted","Data":"e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2"} Jan 20 15:18:45 crc kubenswrapper[4949]: I0120 15:18:45.519678 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerStarted","Data":"dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d"} Jan 20 15:18:45 crc kubenswrapper[4949]: I0120 15:18:45.539680 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" podStartSLOduration=2.071206126 podStartE2EDuration="2.539654344s" podCreationTimestamp="2026-01-20 15:18:43 +0000 UTC" firstStartedPulling="2026-01-20 15:18:44.558158719 +0000 UTC m=+1720.367989577" lastFinishedPulling="2026-01-20 15:18:45.026606927 +0000 UTC m=+1720.836437795" observedRunningTime="2026-01-20 15:18:45.535881576 +0000 UTC m=+1721.345712474" watchObservedRunningTime="2026-01-20 15:18:45.539654344 +0000 UTC m=+1721.349485232" Jan 20 15:18:50 crc kubenswrapper[4949]: I0120 15:18:50.081123 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:18:50 crc kubenswrapper[4949]: I0120 15:18:50.095857 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-845d4"] Jan 20 15:18:50 crc kubenswrapper[4949]: I0120 15:18:50.802470 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d68b174-da83-41e7-804c-68a858beedf7" path="/var/lib/kubelet/pods/5d68b174-da83-41e7-804c-68a858beedf7/volumes" Jan 20 15:18:55 crc kubenswrapper[4949]: I0120 15:18:55.616575 4949 generic.go:334] "Generic (PLEG): container finished" podID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerID="e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2" exitCode=0 Jan 20 15:18:55 crc kubenswrapper[4949]: I0120 15:18:55.616641 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerDied","Data":"e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2"} Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.105746 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.149640 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") pod \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.149808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") pod \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.149858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") pod \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\" (UID: \"d10292b4-ea8f-4236-8d89-3b97f21a04cb\") " Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.157792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v" (OuterVolumeSpecName: "kube-api-access-cgb8v") pod "d10292b4-ea8f-4236-8d89-3b97f21a04cb" (UID: "d10292b4-ea8f-4236-8d89-3b97f21a04cb"). InnerVolumeSpecName "kube-api-access-cgb8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.178094 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d10292b4-ea8f-4236-8d89-3b97f21a04cb" (UID: "d10292b4-ea8f-4236-8d89-3b97f21a04cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.183256 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory" (OuterVolumeSpecName: "inventory") pod "d10292b4-ea8f-4236-8d89-3b97f21a04cb" (UID: "d10292b4-ea8f-4236-8d89-3b97f21a04cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.251084 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.251118 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d10292b4-ea8f-4236-8d89-3b97f21a04cb-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.251131 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgb8v\" (UniqueName: \"kubernetes.io/projected/d10292b4-ea8f-4236-8d89-3b97f21a04cb-kube-api-access-cgb8v\") on node \"crc\" DevicePath \"\"" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.645791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" event={"ID":"d10292b4-ea8f-4236-8d89-3b97f21a04cb","Type":"ContainerDied","Data":"dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d"} Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.645907 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc11db9a06eb60e59455f5aebd80ae6245594ca40e389b469d94dc70e01a94d" Jan 20 15:18:57 crc kubenswrapper[4949]: I0120 15:18:57.645918 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4" Jan 20 15:18:58 crc kubenswrapper[4949]: I0120 15:18:58.793031 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:18:58 crc kubenswrapper[4949]: E0120 15:18:58.798833 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.066728 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.073969 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pxxs"] Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.790810 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:19:12 crc kubenswrapper[4949]: E0120 15:19:12.791298 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:19:12 crc kubenswrapper[4949]: I0120 15:19:12.816373 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5364ff4f-3ee5-4577-b82c-0c094bd55125" 
path="/var/lib/kubelet/pods/5364ff4f-3ee5-4577-b82c-0c094bd55125/volumes" Jan 20 15:19:13 crc kubenswrapper[4949]: I0120 15:19:13.069751 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"] Jan 20 15:19:13 crc kubenswrapper[4949]: I0120 15:19:13.088478 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n8g8k"] Jan 20 15:19:14 crc kubenswrapper[4949]: I0120 15:19:14.802471 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883cbf80-263a-4fc7-b962-147019f05553" path="/var/lib/kubelet/pods/883cbf80-263a-4fc7-b962-147019f05553/volumes" Jan 20 15:19:16 crc kubenswrapper[4949]: I0120 15:19:16.911934 4949 scope.go:117] "RemoveContainer" containerID="24dbf49c8beca72a4d37ee3920737a645e4fe60fe68139ee7aef223996ccfdb6" Jan 20 15:19:16 crc kubenswrapper[4949]: I0120 15:19:16.952099 4949 scope.go:117] "RemoveContainer" containerID="3f15d6945e44e9c6e53794e87d22474ffe01f158595e34834b396d2b04dfd49c" Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.029579 4949 scope.go:117] "RemoveContainer" containerID="a80d33e24a9ba74ceb162bb93f7bd8ff3d5731341ef1bf722ccfa8d027aff1fd" Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.060718 4949 scope.go:117] "RemoveContainer" containerID="135008a156949889d1049508e72bc07f9183b62985200f63db1952335429a011" Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.094100 4949 scope.go:117] "RemoveContainer" containerID="8b0cc583724d3b927981b50c04490bc942db17a6a69e1c60b8114fe3f564f67a" Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.169967 4949 scope.go:117] "RemoveContainer" containerID="ec2dae8432df7c2929adea704eae50d10b8a89f1c8b6ef4c8463800765d6dc4d" Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.185267 4949 scope.go:117] "RemoveContainer" containerID="ea9b847c91449323272eebae1f55f6d7768779cc32907a027b2a7c8dfb6cb9ec" Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.219183 4949 scope.go:117] "RemoveContainer" containerID="938e9ce45ea628368ef94bbc41df4467906d61f367eb62e455efc51fc6c3edfd" Jan 20 15:19:17 crc kubenswrapper[4949]: I0120 15:19:17.235621 4949 scope.go:117] "RemoveContainer" containerID="1234260b184752a89b6e70a1ae59d09a4b3f7d03f7fb974dc5afeaccba79232f" Jan 20 15:19:27 crc kubenswrapper[4949]: I0120 15:19:27.789504 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:19:27 crc kubenswrapper[4949]: E0120 15:19:27.790496 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:19:40 crc kubenswrapper[4949]: I0120 15:19:40.789108 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:19:40 crc kubenswrapper[4949]: E0120 15:19:40.789736 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:19:53 crc kubenswrapper[4949]: I0120 15:19:53.789460 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:19:53 crc kubenswrapper[4949]: E0120 15:19:53.790467 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:19:57 crc kubenswrapper[4949]: I0120 15:19:57.040333 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:19:57 crc kubenswrapper[4949]: I0120 15:19:57.051742 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jgctz"] Jan 20 15:19:58 crc kubenswrapper[4949]: I0120 15:19:58.800995 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462eb38e-1d62-43e2-92c4-1074a1c054b9" path="/var/lib/kubelet/pods/462eb38e-1d62-43e2-92c4-1074a1c054b9/volumes" Jan 20 15:20:05 crc kubenswrapper[4949]: I0120 15:20:05.788685 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:20:05 crc kubenswrapper[4949]: E0120 15:20:05.789320 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:20:17 crc kubenswrapper[4949]: I0120 15:20:17.379806 4949 scope.go:117] "RemoveContainer" containerID="4bacb42c86db9d32cafede00ec29f8308a27e34795cfa26fb587384d2da7e640" Jan 20 15:20:19 crc kubenswrapper[4949]: I0120 15:20:19.789690 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:20:19 crc kubenswrapper[4949]: E0120 15:20:19.790489 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:20:30 crc kubenswrapper[4949]: I0120 15:20:30.789982 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:20:31 crc kubenswrapper[4949]: I0120 15:20:31.535395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c"} Jan 20 15:22:57 crc kubenswrapper[4949]: I0120 15:22:57.152652 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:22:57 crc kubenswrapper[4949]: I0120 15:22:57.153237 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:23:27 crc kubenswrapper[4949]: I0120 15:23:27.152030 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:23:27 crc kubenswrapper[4949]: I0120 15:23:27.152458 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.817615 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.835418 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-97647"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.846383 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.854503 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.861584 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.870844 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.878985 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.887258 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.895230 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.903694 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.911550 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-5h8ll"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.918005 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-l5wgq"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.923627 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rx986"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.929226 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwmt2"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.935050 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ptr2f"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.940245 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwnqp"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.945814 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-cfft5"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.951850 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qcxv4"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.957402 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"] Jan 20 15:23:28 crc kubenswrapper[4949]: I0120 15:23:28.962366 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4vw6q"] Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.802455 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af1d203-d1de-4e8b-95cb-7977a46b0042" path="/var/lib/kubelet/pods/3af1d203-d1de-4e8b-95cb-7977a46b0042/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.804053 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b69ef09-6dac-4ebb-b970-9c94553bea5a" path="/var/lib/kubelet/pods/3b69ef09-6dac-4ebb-b970-9c94553bea5a/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.805221 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744449f9-40c5-4c12-944e-f9ff875daf40" path="/var/lib/kubelet/pods/744449f9-40c5-4c12-944e-f9ff875daf40/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.806467 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949e48ac-89ca-4f38-886e-fd951c7d7217" path="/var/lib/kubelet/pods/949e48ac-89ca-4f38-886e-fd951c7d7217/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.808438 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f6253d-b990-4892-bd1f-9534caf70130" path="/var/lib/kubelet/pods/96f6253d-b990-4892-bd1f-9534caf70130/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.809036 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b62cf27-c244-466f-bddd-129a1a3db687" path="/var/lib/kubelet/pods/9b62cf27-c244-466f-bddd-129a1a3db687/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.809609 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab73db4b-4663-4234-be64-866efa186f5a" path="/var/lib/kubelet/pods/ab73db4b-4663-4234-be64-866efa186f5a/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.810607 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3f2ff6-def1-41aa-8918-32399eb1a55b" 
path="/var/lib/kubelet/pods/ba3f2ff6-def1-41aa-8918-32399eb1a55b/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.811108 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" path="/var/lib/kubelet/pods/d10292b4-ea8f-4236-8d89-3b97f21a04cb/volumes" Jan 20 15:23:30 crc kubenswrapper[4949]: I0120 15:23:30.811613 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d847d1-1215-4c1c-9741-fb2dcf39e42d" path="/var/lib/kubelet/pods/f8d847d1-1215-4c1c-9741-fb2dcf39e42d/volumes" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.444077 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"] Jan 20 15:23:34 crc kubenswrapper[4949]: E0120 15:23:34.445076 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.445095 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.445482 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10292b4-ea8f-4236-8d89-3b97f21a04cb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.446368 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.450314 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.450477 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.450891 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.451029 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.451347 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.463102 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"] Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603612 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603935 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603956 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.603991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.604030 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705776 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705898 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.705972 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.713874 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.715152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.722675 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.723035 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.724062 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:34 crc kubenswrapper[4949]: I0120 15:23:34.771157 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:35 crc kubenswrapper[4949]: I0120 15:23:35.301170 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk"] Jan 20 15:23:35 crc kubenswrapper[4949]: I0120 15:23:35.312782 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.224256 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerStarted","Data":"54b2e5fe7c30dbad96979fa5313815e89980d3ef75f992ecd91ec8271bf9fb04"} Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.498499 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.502448 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.504933 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.643764 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.643880 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.643908 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745323 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745356 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.745877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.746388 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.763088 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"redhat-marketplace-gbjkv\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:36 crc kubenswrapper[4949]: I0120 15:23:36.821385 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:37 crc kubenswrapper[4949]: I0120 15:23:37.233079 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerStarted","Data":"c40e9a0a3aa913e3f9829441ef7ab55b34ecce12e55538c1a729cd1d90d48dc8"} Jan 20 15:23:37 crc kubenswrapper[4949]: I0120 15:23:37.282138 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" podStartSLOduration=2.260051714 podStartE2EDuration="3.282122617s" podCreationTimestamp="2026-01-20 15:23:34 +0000 UTC" firstStartedPulling="2026-01-20 15:23:35.312580931 +0000 UTC m=+2011.122411789" lastFinishedPulling="2026-01-20 15:23:36.334651834 +0000 UTC m=+2012.144482692" observedRunningTime="2026-01-20 15:23:37.276871023 +0000 UTC m=+2013.086701881" watchObservedRunningTime="2026-01-20 15:23:37.282122617 +0000 UTC m=+2013.091953475" Jan 20 15:23:37 crc kubenswrapper[4949]: I0120 15:23:37.298019 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:37 crc kubenswrapper[4949]: W0120 15:23:37.300834 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68738f06_f8fa_40ea_8af8_9aad9957433b.slice/crio-28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c WatchSource:0}: Error finding container 28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c: Status 404 returned error can't find the container with id 28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c Jan 20 15:23:38 crc kubenswrapper[4949]: I0120 15:23:38.245831 4949 generic.go:334] "Generic (PLEG): container finished" podID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238" exitCode=0 Jan 20 15:23:38 crc kubenswrapper[4949]: I0120 15:23:38.245920 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"} Jan 20 15:23:38 crc kubenswrapper[4949]: I0120 15:23:38.246102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerStarted","Data":"28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c"} Jan 20 15:23:40 crc kubenswrapper[4949]: I0120 15:23:40.271655 4949 generic.go:334] "Generic (PLEG): container finished" podID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5" exitCode=0 Jan 20 15:23:40 crc kubenswrapper[4949]: I0120 15:23:40.271786 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" 
event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5"} Jan 20 15:23:43 crc kubenswrapper[4949]: I0120 15:23:43.322234 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerStarted","Data":"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87"} Jan 20 15:23:43 crc kubenswrapper[4949]: I0120 15:23:43.343751 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gbjkv" podStartSLOduration=2.6413070899999997 podStartE2EDuration="7.343733404s" podCreationTimestamp="2026-01-20 15:23:36 +0000 UTC" firstStartedPulling="2026-01-20 15:23:38.2479518 +0000 UTC m=+2014.057782658" lastFinishedPulling="2026-01-20 15:23:42.950378114 +0000 UTC m=+2018.760208972" observedRunningTime="2026-01-20 15:23:43.342055971 +0000 UTC m=+2019.151886839" watchObservedRunningTime="2026-01-20 15:23:43.343733404 +0000 UTC m=+2019.153564272" Jan 20 15:23:46 crc kubenswrapper[4949]: I0120 15:23:46.821917 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:46 crc kubenswrapper[4949]: I0120 15:23:46.822281 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:46 crc kubenswrapper[4949]: I0120 15:23:46.897111 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:49 crc kubenswrapper[4949]: I0120 15:23:49.380461 4949 generic.go:334] "Generic (PLEG): container finished" podID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerID="c40e9a0a3aa913e3f9829441ef7ab55b34ecce12e55538c1a729cd1d90d48dc8" exitCode=0 Jan 20 15:23:49 crc kubenswrapper[4949]: I0120 15:23:49.380548 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerDied","Data":"c40e9a0a3aa913e3f9829441ef7ab55b34ecce12e55538c1a729cd1d90d48dc8"} Jan 20 15:23:50 crc kubenswrapper[4949]: I0120 15:23:50.852116 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031249 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031315 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031412 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031478 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.031562 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") pod \"f5d6330b-b87a-476b-bebc-a790026e5dd3\" (UID: \"f5d6330b-b87a-476b-bebc-a790026e5dd3\") " Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.038320 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.038379 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph" (OuterVolumeSpecName: "ceph") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.039474 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h" (OuterVolumeSpecName: "kube-api-access-lj84h") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "kube-api-access-lj84h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.062677 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory" (OuterVolumeSpecName: "inventory") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.083672 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f5d6330b-b87a-476b-bebc-a790026e5dd3" (UID: "f5d6330b-b87a-476b-bebc-a790026e5dd3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134508 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134563 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134581 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134594 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj84h\" (UniqueName: \"kubernetes.io/projected/f5d6330b-b87a-476b-bebc-a790026e5dd3-kube-api-access-lj84h\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.134607 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d6330b-b87a-476b-bebc-a790026e5dd3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.410839 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" event={"ID":"f5d6330b-b87a-476b-bebc-a790026e5dd3","Type":"ContainerDied","Data":"54b2e5fe7c30dbad96979fa5313815e89980d3ef75f992ecd91ec8271bf9fb04"} Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.410908 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b2e5fe7c30dbad96979fa5313815e89980d3ef75f992ecd91ec8271bf9fb04" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.410924 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.488194 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"] Jan 20 15:23:51 crc kubenswrapper[4949]: E0120 15:23:51.488668 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.488694 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.488933 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d6330b-b87a-476b-bebc-a790026e5dd3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.489730 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492056 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492390 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492420 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492852 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.492856 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.511189 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"] Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642481 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642685 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.642771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744627 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744703 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744768 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744800 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.744879 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.751233 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: 
\"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.752007 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.752084 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.752421 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.767870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:51 crc kubenswrapper[4949]: I0120 15:23:51.820560 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:23:52 crc kubenswrapper[4949]: I0120 15:23:52.354851 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh"] Jan 20 15:23:52 crc kubenswrapper[4949]: I0120 15:23:52.427673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerStarted","Data":"0ecd3ebaf4a3c0b15aa73ac23e355a64a8d90922ff7acdf6f56594962dac1f09"} Jan 20 15:23:53 crc kubenswrapper[4949]: I0120 15:23:53.437066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerStarted","Data":"647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b"} Jan 20 15:23:53 crc kubenswrapper[4949]: I0120 15:23:53.461122 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" podStartSLOduration=1.949380109 podStartE2EDuration="2.461099633s" podCreationTimestamp="2026-01-20 15:23:51 +0000 UTC" firstStartedPulling="2026-01-20 15:23:52.370864098 +0000 UTC m=+2028.180694956" lastFinishedPulling="2026-01-20 15:23:52.882583602 +0000 UTC m=+2028.692414480" observedRunningTime="2026-01-20 15:23:53.455480508 +0000 UTC m=+2029.265311366" watchObservedRunningTime="2026-01-20 15:23:53.461099633 +0000 UTC m=+2029.270930491" Jan 20 15:23:56 crc kubenswrapper[4949]: I0120 15:23:56.869473 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:56 crc kubenswrapper[4949]: I0120 15:23:56.934891 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.152844 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.152919 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.152980 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.153981 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.154065 4949 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c" gracePeriod=600 Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.475779 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c" exitCode=0 Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.475976 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gbjkv" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server" containerID="cri-o://455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" gracePeriod=2 Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.476046 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c"} Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.476078 4949 scope.go:117] "RemoveContainer" containerID="4da7c1898d63c1e824c9774f1acfa618128d860c4cca92d7135ee2e368ae725a" Jan 20 15:23:57 crc kubenswrapper[4949]: I0120 15:23:57.919823 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.076169 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") pod \"68738f06-f8fa-40ea-8af8-9aad9957433b\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.076335 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") pod \"68738f06-f8fa-40ea-8af8-9aad9957433b\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.077174 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities" (OuterVolumeSpecName: "utilities") pod "68738f06-f8fa-40ea-8af8-9aad9957433b" (UID: "68738f06-f8fa-40ea-8af8-9aad9957433b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.077268 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") pod \"68738f06-f8fa-40ea-8af8-9aad9957433b\" (UID: \"68738f06-f8fa-40ea-8af8-9aad9957433b\") " Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.079103 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.085146 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d" (OuterVolumeSpecName: "kube-api-access-xqx6d") pod "68738f06-f8fa-40ea-8af8-9aad9957433b" (UID: "68738f06-f8fa-40ea-8af8-9aad9957433b"). InnerVolumeSpecName "kube-api-access-xqx6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.098597 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68738f06-f8fa-40ea-8af8-9aad9957433b" (UID: "68738f06-f8fa-40ea-8af8-9aad9957433b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.181069 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68738f06-f8fa-40ea-8af8-9aad9957433b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.181114 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqx6d\" (UniqueName: \"kubernetes.io/projected/68738f06-f8fa-40ea-8af8-9aad9957433b-kube-api-access-xqx6d\") on node \"crc\" DevicePath \"\"" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484834 4949 generic.go:334] "Generic (PLEG): container finished" podID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" exitCode=0 Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484904 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87"} Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484956 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbjkv" event={"ID":"68738f06-f8fa-40ea-8af8-9aad9957433b","Type":"ContainerDied","Data":"28875eaf4ca0681342251f749bf077f900bf05326d3032181df1975f0d82617c"} Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.484985 4949 scope.go:117] "RemoveContainer" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.485558 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbjkv" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.491840 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"} Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.549674 4949 scope.go:117] "RemoveContainer" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.570168 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.574992 4949 scope.go:117] "RemoveContainer" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.577880 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbjkv"] Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.613196 4949 scope.go:117] "RemoveContainer" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" Jan 20 15:23:58 crc kubenswrapper[4949]: E0120 15:23:58.613684 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87\": container with ID starting with 455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87 not found: ID does not exist" containerID="455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.613718 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87"} err="failed to get container status \"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87\": rpc error: code = NotFound desc = could not find container \"455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87\": container with ID starting with 455c34c961525a6f6153232e462c9cc9cf17419f742ae1f57e2660958671de87 not found: ID does not exist" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.613744 4949 scope.go:117] "RemoveContainer" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5" Jan 20 15:23:58 crc kubenswrapper[4949]: E0120 15:23:58.614493 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5\": container with ID starting with 0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5 not found: ID does not exist" containerID="0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5" Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.614716 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5"} err="failed to get container status \"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5\": rpc error: code = NotFound desc = could not find container \"0ea29fc27d360cdfc1758f3f6c06127618c946b62b831563b0359562d33db4a5\": container with ID starting with 
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.614818 4949 scope.go:117] "RemoveContainer" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"
Jan 20 15:23:58 crc kubenswrapper[4949]: E0120 15:23:58.615369 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238\": container with ID starting with 3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238 not found: ID does not exist" containerID="3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.615477 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238"} err="failed to get container status \"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238\": rpc error: code = NotFound desc = could not find container \"3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238\": container with ID starting with 3bcd07b0cdf2f1a067c90e8cf91eae13f8a6bfefac38063878b833de0065d238 not found: ID does not exist"
Jan 20 15:23:58 crc kubenswrapper[4949]: I0120 15:23:58.802442 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" path="/var/lib/kubelet/pods/68738f06-f8fa-40ea-8af8-9aad9957433b/volumes"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.510381 4949 scope.go:117] "RemoveContainer" containerID="97e58c67a0e98003cafb39480a06ac591ad9fe0f96f2a75ba8e22c54e01c1684"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.542768 4949 scope.go:117] "RemoveContainer" containerID="c94adfbb3bcf494a6ed833a1ec72b7c8690cb46c6a6ede8826f4415c70fb76b3"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.633199 4949 scope.go:117] "RemoveContainer" containerID="1d63423a01df08b03bee2e370a0b400c09d1d1f14c81f27c3919b0933a8309a6"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.659436 4949 scope.go:117] "RemoveContainer" containerID="e853b33d218fefaac9eaa8c42597b4fc7a0f0c58f70fdeb9cf7e2318c41718d3"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.729188 4949 scope.go:117] "RemoveContainer" containerID="f219443796223b8979c4f2b7127d3bca2a123adb4cdd20183cbd06a84853e4d3"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.758063 4949 scope.go:117] "RemoveContainer" containerID="61d6a1c0b85df33d5d78057047d5341fbd91345414dd9c25ddd0c69c028e9f1c"
Jan 20 15:24:17 crc kubenswrapper[4949]: I0120 15:24:17.818550 4949 scope.go:117] "RemoveContainer" containerID="0aac50da813170e2d292b4794c74f28cf8e895ea7cadf5112dc53f78c6d69624"
Jan 20 15:25:17 crc kubenswrapper[4949]: I0120 15:25:17.993845 4949 scope.go:117] "RemoveContainer" containerID="e8f99ca0e0d8a9a7fcf7d552f25de1b26b895f7528eeed64b2b6738375ccaae2"
Jan 20 15:25:18 crc kubenswrapper[4949]: I0120 15:25:18.040363 4949 scope.go:117] "RemoveContainer" containerID="34d9fdce66eba37a0cde8429948ec3cf10ca4344d01055582003057c7611eca1"
Jan 20 15:25:18 crc kubenswrapper[4949]: I0120 15:25:18.104232 4949 scope.go:117] "RemoveContainer" containerID="56ac5c2088c598e4fcaa7786fdcab5e98d541fa419bbd7c17fa0ae992434ce82"
Jan 20 15:25:56 crc kubenswrapper[4949]: E0120 15:25:56.050937 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda7cee45_2ef4_4ebc_8067_08dbe10af76a.slice/crio-647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda7cee45_2ef4_4ebc_8067_08dbe10af76a.slice/crio-conmon-647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b.scope\": RecentStats: unable to find data in memory cache]"
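The cadvisor error just above is transient: the bootstrap container was exiting at that moment, so its cgroup had no recent samples in the stats cache. The cgroup scope names it cites are mechanical: a best-effort pod slice (the pod UID with dashes mapped to underscores) containing a crio-<container-id>.scope. Reconstructing one of the logged paths as a sanity check:

    package main

    import (
        "fmt"
        "strings"
    )

    // crioScope rebuilds the systemd cgroup path format seen in the cadvisor
    // error (illustrative string assembly, not an API from cadvisor or CRI-O).
    func crioScope(podUID, containerID string) string {
        slice := "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
        return fmt.Sprintf("/kubepods.slice/kubepods-besteffort.slice/%s/crio-%s.scope", slice, containerID)
    }

    func main() {
        // Matches the first path in the "Partial failure" error above.
        fmt.Println(crioScope(
            "da7cee45-2ef4-4ebc-8067-08dbe10af76a",
            "647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b",
        ))
    }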
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda7cee45_2ef4_4ebc_8067_08dbe10af76a.slice/crio-647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda7cee45_2ef4_4ebc_8067_08dbe10af76a.slice/crio-conmon-647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b.scope\": RecentStats: unable to find data in memory cache]" Jan 20 15:25:56 crc kubenswrapper[4949]: I0120 15:25:56.578107 4949 generic.go:334] "Generic (PLEG): container finished" podID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerID="647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b" exitCode=0 Jan 20 15:25:56 crc kubenswrapper[4949]: I0120 15:25:56.578242 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerDied","Data":"647ed5579b033ad5803f2af7e0d7f740e1b7f9704893ccd801e67ddecce8eb8b"} Jan 20 15:25:57 crc kubenswrapper[4949]: I0120 15:25:57.152134 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:25:57 crc kubenswrapper[4949]: I0120 15:25:57.152190 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.083165 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250200 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250271 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250442 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250482 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.250567 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") pod \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\" (UID: \"da7cee45-2ef4-4ebc-8067-08dbe10af76a\") " Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.257418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw" (OuterVolumeSpecName: "kube-api-access-cvfmw") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "kube-api-access-cvfmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.257563 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.259425 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph" (OuterVolumeSpecName: "ceph") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.277138 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory" (OuterVolumeSpecName: "inventory") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.278304 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da7cee45-2ef4-4ebc-8067-08dbe10af76a" (UID: "da7cee45-2ef4-4ebc-8067-08dbe10af76a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353286 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353324 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvfmw\" (UniqueName: \"kubernetes.io/projected/da7cee45-2ef4-4ebc-8067-08dbe10af76a-kube-api-access-cvfmw\") on node \"crc\" DevicePath \"\"" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353339 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353351 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.353362 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7cee45-2ef4-4ebc-8067-08dbe10af76a-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.598147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" event={"ID":"da7cee45-2ef4-4ebc-8067-08dbe10af76a","Type":"ContainerDied","Data":"0ecd3ebaf4a3c0b15aa73ac23e355a64a8d90922ff7acdf6f56594962dac1f09"} Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.598569 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ecd3ebaf4a3c0b15aa73ac23e355a64a8d90922ff7acdf6f56594962dac1f09" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.598218 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678346 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"] Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678700 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-content" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678721 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-content" Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678736 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678743 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server" Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678752 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678760 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 15:25:58 crc kubenswrapper[4949]: E0120 15:25:58.678776 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-utilities" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678783 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="extract-utilities" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678946 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7cee45-2ef4-4ebc-8067-08dbe10af76a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.678963 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="68738f06-f8fa-40ea-8af8-9aad9957433b" containerName="registry-server" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.679608 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.681930 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682201 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682367 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682569 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.682927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.688534 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"] Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.860887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.860942 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.860974 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.861123 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.962561 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc 
kubenswrapper[4949]: I0120 15:25:58.962612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.962645 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.962697 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.966299 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.966445 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.967296 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.980547 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x77fv\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:58 crc kubenswrapper[4949]: I0120 15:25:58.996544 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:25:59 crc kubenswrapper[4949]: I0120 15:25:59.578809 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv"] Jan 20 15:25:59 crc kubenswrapper[4949]: I0120 15:25:59.607639 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerStarted","Data":"623c249b9edeb9c198576359b340a855c2ad68f727f565e6f53835483010c33b"} Jan 20 15:26:00 crc kubenswrapper[4949]: I0120 15:26:00.617181 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerStarted","Data":"fa1a2396c5378b56d958cdc80810c2f9c6698dd639b9b94cbc5ce91408852fe7"} Jan 20 15:26:00 crc kubenswrapper[4949]: I0120 15:26:00.641148 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" podStartSLOduration=2.141157226 podStartE2EDuration="2.64112695s" podCreationTimestamp="2026-01-20 15:25:58 +0000 UTC" firstStartedPulling="2026-01-20 15:25:59.586155259 +0000 UTC m=+2155.395986117" lastFinishedPulling="2026-01-20 15:26:00.086124983 +0000 UTC m=+2155.895955841" observedRunningTime="2026-01-20 15:26:00.639188582 +0000 UTC m=+2156.449019440" watchObservedRunningTime="2026-01-20 15:26:00.64112695 +0000 UTC m=+2156.450957808" Jan 20 15:26:26 crc kubenswrapper[4949]: I0120 15:26:26.866632 4949 generic.go:334] "Generic (PLEG): container finished" podID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerID="fa1a2396c5378b56d958cdc80810c2f9c6698dd639b9b94cbc5ce91408852fe7" exitCode=0 Jan 20 15:26:26 crc kubenswrapper[4949]: I0120 15:26:26.866751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerDied","Data":"fa1a2396c5378b56d958cdc80810c2f9c6698dd639b9b94cbc5ce91408852fe7"} Jan 20 15:26:27 crc kubenswrapper[4949]: I0120 15:26:27.152023 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:26:27 crc kubenswrapper[4949]: I0120 15:26:27.152396 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.374059 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483154 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483338 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483379 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.483402 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") pod \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\" (UID: \"6951e28c-3b02-44dd-9823-d0e4d1a779d5\") " Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.490667 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np" (OuterVolumeSpecName: "kube-api-access-hq2np") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "kube-api-access-hq2np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.490872 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph" (OuterVolumeSpecName: "ceph") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.513718 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.525452 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory" (OuterVolumeSpecName: "inventory") pod "6951e28c-3b02-44dd-9823-d0e4d1a779d5" (UID: "6951e28c-3b02-44dd-9823-d0e4d1a779d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585460 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585487 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585500 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq2np\" (UniqueName: \"kubernetes.io/projected/6951e28c-3b02-44dd-9823-d0e4d1a779d5-kube-api-access-hq2np\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.585511 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6951e28c-3b02-44dd-9823-d0e4d1a779d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.885285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" event={"ID":"6951e28c-3b02-44dd-9823-d0e4d1a779d5","Type":"ContainerDied","Data":"623c249b9edeb9c198576359b340a855c2ad68f727f565e6f53835483010c33b"} Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.885552 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623c249b9edeb9c198576359b340a855c2ad68f727f565e6f53835483010c33b" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.885371 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x77fv" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.992470 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"] Jan 20 15:26:28 crc kubenswrapper[4949]: E0120 15:26:28.993055 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.993159 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.993386 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6951e28c-3b02-44dd-9823-d0e4d1a779d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.994039 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.996461 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.996501 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.996474 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.997167 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:26:28 crc kubenswrapper[4949]: I0120 15:26:28.997342 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.007290 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"] Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095054 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.095306 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197048 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197164 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197203 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.197260 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.204509 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.204561 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.204893 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.217120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.322485 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.829913 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7"] Jan 20 15:26:29 crc kubenswrapper[4949]: I0120 15:26:29.894812 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerStarted","Data":"ac480ed39815ab9b58878a6fa095a000358d36e2dcc4e6676d778165ddcc0f14"} Jan 20 15:26:30 crc kubenswrapper[4949]: I0120 15:26:30.906166 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerStarted","Data":"0e4a123aaf82fcf5f55ab882041725ddfd86b53e7bd4d71e8b50344716c9c44c"} Jan 20 15:26:30 crc kubenswrapper[4949]: I0120 15:26:30.943059 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" podStartSLOduration=2.422857365 podStartE2EDuration="2.942967435s" podCreationTimestamp="2026-01-20 15:26:28 +0000 UTC" firstStartedPulling="2026-01-20 15:26:29.829861316 +0000 UTC m=+2185.639692174" lastFinishedPulling="2026-01-20 15:26:30.349971346 +0000 UTC m=+2186.159802244" observedRunningTime="2026-01-20 15:26:30.924316133 +0000 UTC m=+2186.734146991" watchObservedRunningTime="2026-01-20 15:26:30.942967435 +0000 UTC m=+2186.752798333" Jan 20 15:26:35 crc kubenswrapper[4949]: I0120 15:26:35.989806 4949 generic.go:334] "Generic (PLEG): container finished" podID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerID="0e4a123aaf82fcf5f55ab882041725ddfd86b53e7bd4d71e8b50344716c9c44c" exitCode=0 Jan 20 15:26:35 crc kubenswrapper[4949]: I0120 15:26:35.989891 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerDied","Data":"0e4a123aaf82fcf5f55ab882041725ddfd86b53e7bd4d71e8b50344716c9c44c"} Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.423328 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583071 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583316 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583392 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.583596 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") pod \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\" (UID: \"cb58fe7e-6a7d-46ea-82ad-02e9200e8042\") " Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.590191 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk" (OuterVolumeSpecName: "kube-api-access-7m7hk") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "kube-api-access-7m7hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.593202 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph" (OuterVolumeSpecName: "ceph") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.616578 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.634167 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory" (OuterVolumeSpecName: "inventory") pod "cb58fe7e-6a7d-46ea-82ad-02e9200e8042" (UID: "cb58fe7e-6a7d-46ea-82ad-02e9200e8042"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686320 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686365 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m7hk\" (UniqueName: \"kubernetes.io/projected/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-kube-api-access-7m7hk\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686379 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:37 crc kubenswrapper[4949]: I0120 15:26:37.686393 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb58fe7e-6a7d-46ea-82ad-02e9200e8042-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.011331 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" event={"ID":"cb58fe7e-6a7d-46ea-82ad-02e9200e8042","Type":"ContainerDied","Data":"ac480ed39815ab9b58878a6fa095a000358d36e2dcc4e6676d778165ddcc0f14"} Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.011670 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac480ed39815ab9b58878a6fa095a000358d36e2dcc4e6676d778165ddcc0f14" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.011412 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.086051 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f"] Jan 20 15:26:38 crc kubenswrapper[4949]: E0120 15:26:38.086830 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.086924 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.087160 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb58fe7e-6a7d-46ea-82ad-02e9200e8042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.087905 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.090378 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.090880 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.091497 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.091918 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.101883 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.105752 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f"] Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195578 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195758 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.195812 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297340 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297410 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297471 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.297508 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.302180 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.302190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.302655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.318235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fqp9f\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.409140 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:26:38 crc kubenswrapper[4949]: I0120 15:26:38.943713 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f"] Jan 20 15:26:39 crc kubenswrapper[4949]: I0120 15:26:39.021378 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerStarted","Data":"240cc9c2fcb2aab50d26320ee84f531a2489e4545f8b419b6cde8ba6fab50b2b"} Jan 20 15:26:42 crc kubenswrapper[4949]: I0120 15:26:42.103685 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerStarted","Data":"dd42c64705c3db9fac7bb5ebb52eecbc1da2f31000c4d0150b5155e6d7154edf"} Jan 20 15:26:42 crc kubenswrapper[4949]: I0120 15:26:42.126766 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" podStartSLOduration=1.9454792589999998 podStartE2EDuration="4.126748583s" podCreationTimestamp="2026-01-20 15:26:38 +0000 UTC" firstStartedPulling="2026-01-20 15:26:38.953441809 +0000 UTC m=+2194.763272667" lastFinishedPulling="2026-01-20 15:26:41.134711133 +0000 UTC m=+2196.944541991" observedRunningTime="2026-01-20 15:26:42.125708551 +0000 UTC m=+2197.935539409" watchObservedRunningTime="2026-01-20 15:26:42.126748583 +0000 UTC m=+2197.936579441" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.152543 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.153444 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.153615 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.154893 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:26:57 crc kubenswrapper[4949]: I0120 15:26:57.154969 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" gracePeriod=600 Jan 20 15:26:57 crc kubenswrapper[4949]: E0120 15:26:57.281289 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.237707 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" exitCode=0 Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.237721 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"} Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.238169 4949 scope.go:117] "RemoveContainer" containerID="50470942ed64eb0461ec232cf604912e413f448b18906befc70089603969353c" Jan 20 15:26:58 crc kubenswrapper[4949]: I0120 15:26:58.238989 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:26:58 crc kubenswrapper[4949]: E0120 15:26:58.239395 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:08 crc kubenswrapper[4949]: I0120 15:27:08.788581 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:08 crc kubenswrapper[4949]: E0120 15:27:08.789312 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:19 crc kubenswrapper[4949]: I0120 15:27:19.410672 4949 generic.go:334] "Generic (PLEG): container finished" podID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerID="dd42c64705c3db9fac7bb5ebb52eecbc1da2f31000c4d0150b5155e6d7154edf" exitCode=0 Jan 20 15:27:19 crc kubenswrapper[4949]: I0120 15:27:19.410706 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerDied","Data":"dd42c64705c3db9fac7bb5ebb52eecbc1da2f31000c4d0150b5155e6d7154edf"} Jan 20 15:27:20 crc kubenswrapper[4949]: I0120 15:27:20.791312 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:20 crc kubenswrapper[4949]: E0120 15:27:20.791853 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:20 crc kubenswrapper[4949]: I0120 15:27:20.927648 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062096 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062730 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.062878 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") pod \"a8ca811b-8738-49ed-b552-bdf38a5d5650\" (UID: \"a8ca811b-8738-49ed-b552-bdf38a5d5650\") " Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.067858 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph" (OuterVolumeSpecName: "ceph") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.070181 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt" (OuterVolumeSpecName: "kube-api-access-np8zt") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). InnerVolumeSpecName "kube-api-access-np8zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.096073 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory" (OuterVolumeSpecName: "inventory") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.133072 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8ca811b-8738-49ed-b552-bdf38a5d5650" (UID: "a8ca811b-8738-49ed-b552-bdf38a5d5650"). 
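
[Annotation] The patch_prober/prober entries a little earlier show why machine-config-daemon keeps coming back in this log: its liveness probe is an HTTP GET against 127.0.0.1:8798/health, the connection is refused, and the kubelet kills the container and lets the restart policy take over. The logged gracePeriod=600 is presumably the pod's terminationGracePeriodSeconds. A sketch of a probe definition matching the logged host, port, and path; the period and threshold values are illustrative, not taken from this pod:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // A liveness probe consistent with the probe entries in this log:
    // an HTTP GET against 127.0.0.1:8798/health.
    func livenessProbe() *corev1.Probe {
    	return &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			HTTPGet: &corev1.HTTPGetAction{
    				Host: "127.0.0.1",
    				Path: "/health",
    				Port: intstr.FromInt(8798),
    			},
    		},
    		PeriodSeconds:    30, // illustrative
    		FailureThreshold: 3,  // illustrative
    	}
    }

    func main() { fmt.Println("probe port:", livenessProbe().HTTPGet.Port.IntValue()) }
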
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.165607 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.165867 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.165993 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8zt\" (UniqueName: \"kubernetes.io/projected/a8ca811b-8738-49ed-b552-bdf38a5d5650-kube-api-access-np8zt\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.166102 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8ca811b-8738-49ed-b552-bdf38a5d5650-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.427589 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" event={"ID":"a8ca811b-8738-49ed-b552-bdf38a5d5650","Type":"ContainerDied","Data":"240cc9c2fcb2aab50d26320ee84f531a2489e4545f8b419b6cde8ba6fab50b2b"} Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.427631 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240cc9c2fcb2aab50d26320ee84f531a2489e4545f8b419b6cde8ba6fab50b2b" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.427649 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fqp9f" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.530347 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv"] Jan 20 15:27:21 crc kubenswrapper[4949]: E0120 15:27:21.534314 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.534352 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.534857 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ca811b-8738-49ed-b552-bdf38a5d5650" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.535852 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.555720 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.555980 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.556158 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.556373 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.557296 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.570298 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv"] Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.574806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.574889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.574985 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.575038 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.676486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.677178 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.677243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.677324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.680781 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.680854 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.681243 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.700131 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:21 crc kubenswrapper[4949]: I0120 15:27:21.872024 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:22 crc kubenswrapper[4949]: I0120 15:27:22.422194 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv"] Jan 20 15:27:22 crc kubenswrapper[4949]: I0120 15:27:22.435176 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerStarted","Data":"03edd3d91f96ac4d80b00a7cb825470f16f375f292ba6c2d4685a36a627d3a4a"} Jan 20 15:27:24 crc kubenswrapper[4949]: I0120 15:27:24.450773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerStarted","Data":"868adfe83b8ff434ae85bc43209014db32f20152473b060d68f2881537174d9d"} Jan 20 15:27:24 crc kubenswrapper[4949]: I0120 15:27:24.479436 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" podStartSLOduration=2.568294271 podStartE2EDuration="3.479404469s" podCreationTimestamp="2026-01-20 15:27:21 +0000 UTC" firstStartedPulling="2026-01-20 15:27:22.425754579 +0000 UTC m=+2238.235585447" lastFinishedPulling="2026-01-20 15:27:23.336864787 +0000 UTC m=+2239.146695645" observedRunningTime="2026-01-20 15:27:24.470291055 +0000 UTC m=+2240.280121963" watchObservedRunningTime="2026-01-20 15:27:24.479404469 +0000 UTC m=+2240.289235367" Jan 20 15:27:28 crc kubenswrapper[4949]: I0120 15:27:28.486777 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerID="868adfe83b8ff434ae85bc43209014db32f20152473b060d68f2881537174d9d" exitCode=0 Jan 20 15:27:28 crc kubenswrapper[4949]: I0120 15:27:28.487255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerDied","Data":"868adfe83b8ff434ae85bc43209014db32f20152473b060d68f2881537174d9d"} Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.026851 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066417 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066561 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.066802 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") pod \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\" (UID: \"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb\") " Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.072583 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph" (OuterVolumeSpecName: "ceph") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.072923 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c" (OuterVolumeSpecName: "kube-api-access-67z6c") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "kube-api-access-67z6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.109830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory" (OuterVolumeSpecName: "inventory") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.111439 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" (UID: "9f5697b2-a2f0-4b5c-949a-0f52e9e39beb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168927 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168970 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168983 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.168991 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67z6c\" (UniqueName: \"kubernetes.io/projected/9f5697b2-a2f0-4b5c-949a-0f52e9e39beb-kube-api-access-67z6c\") on node \"crc\" DevicePath \"\"" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.505341 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" event={"ID":"9f5697b2-a2f0-4b5c-949a-0f52e9e39beb","Type":"ContainerDied","Data":"03edd3d91f96ac4d80b00a7cb825470f16f375f292ba6c2d4685a36a627d3a4a"} Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.505394 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03edd3d91f96ac4d80b00a7cb825470f16f375f292ba6c2d4685a36a627d3a4a" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.505423 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.581145 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc"] Jan 20 15:27:30 crc kubenswrapper[4949]: E0120 15:27:30.581675 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.581692 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.581896 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5697b2-a2f0-4b5c-949a-0f52e9e39beb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.582611 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.588759 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.588769 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.589099 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.589147 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.589188 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.591509 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc"] Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677830 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677897 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677942 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.677978 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779697 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779809 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.779852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.783594 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.783594 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.784778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.804242 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pscmc\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:30 crc kubenswrapper[4949]: I0120 15:27:30.948418 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:27:31 crc kubenswrapper[4949]: I0120 15:27:31.515442 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc"] Jan 20 15:27:32 crc kubenswrapper[4949]: I0120 15:27:32.521170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerStarted","Data":"890abcd23d68f02ea5ffe3182369199eb6f2925bc2abdb6de4c55f6d08ddb80b"} Jan 20 15:27:33 crc kubenswrapper[4949]: I0120 15:27:33.528628 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerStarted","Data":"f72a53123021c4be6a338015dab539da7dbe5be416a4ebc1b067696c9253b4af"} Jan 20 15:27:33 crc kubenswrapper[4949]: I0120 15:27:33.548530 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" podStartSLOduration=2.499596375 podStartE2EDuration="3.548501724s" podCreationTimestamp="2026-01-20 15:27:30 +0000 UTC" firstStartedPulling="2026-01-20 15:27:31.520848465 +0000 UTC m=+2247.330679323" lastFinishedPulling="2026-01-20 15:27:32.569753814 +0000 UTC m=+2248.379584672" observedRunningTime="2026-01-20 15:27:33.544885255 +0000 UTC m=+2249.354716113" watchObservedRunningTime="2026-01-20 15:27:33.548501724 +0000 UTC m=+2249.358332582" Jan 20 15:27:35 crc kubenswrapper[4949]: I0120 15:27:35.789371 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:35 crc kubenswrapper[4949]: E0120 15:27:35.790058 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:27:48 crc kubenswrapper[4949]: I0120 15:27:48.788984 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:27:48 crc kubenswrapper[4949]: E0120 15:27:48.789841 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:01 crc kubenswrapper[4949]: I0120 15:28:01.789313 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:01 crc kubenswrapper[4949]: E0120 15:28:01.790049 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:14 crc kubenswrapper[4949]: I0120 15:28:14.795488 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:14 crc kubenswrapper[4949]: E0120 15:28:14.796382 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:21 crc kubenswrapper[4949]: I0120 15:28:21.344586 4949 generic.go:334] "Generic (PLEG): container finished" podID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerID="f72a53123021c4be6a338015dab539da7dbe5be416a4ebc1b067696c9253b4af" exitCode=0 Jan 20 15:28:21 crc kubenswrapper[4949]: I0120 15:28:21.344686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerDied","Data":"f72a53123021c4be6a338015dab539da7dbe5be416a4ebc1b067696c9253b4af"} Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.704744 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816220 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816390 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.816485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") pod \"aa357e67-831a-4584-bf56-0c2e58d1aed8\" (UID: \"aa357e67-831a-4584-bf56-0c2e58d1aed8\") " Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.824702 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph" (OuterVolumeSpecName: "ceph") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.829974 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd" (OuterVolumeSpecName: "kube-api-access-mvgvd") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "kube-api-access-mvgvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.846344 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory" (OuterVolumeSpecName: "inventory") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.849804 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa357e67-831a-4584-bf56-0c2e58d1aed8" (UID: "aa357e67-831a-4584-bf56-0c2e58d1aed8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918112 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvgvd\" (UniqueName: \"kubernetes.io/projected/aa357e67-831a-4584-bf56-0c2e58d1aed8-kube-api-access-mvgvd\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918137 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918147 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:22 crc kubenswrapper[4949]: I0120 15:28:22.918155 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa357e67-831a-4584-bf56-0c2e58d1aed8-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.375484 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" event={"ID":"aa357e67-831a-4584-bf56-0c2e58d1aed8","Type":"ContainerDied","Data":"890abcd23d68f02ea5ffe3182369199eb6f2925bc2abdb6de4c55f6d08ddb80b"} Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.375863 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890abcd23d68f02ea5ffe3182369199eb6f2925bc2abdb6de4c55f6d08ddb80b" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.375595 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pscmc" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.477285 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6z8gd"] Jan 20 15:28:23 crc kubenswrapper[4949]: E0120 15:28:23.477819 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.477844 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.478123 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa357e67-831a-4584-bf56-0c2e58d1aed8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.478944 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.480886 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.481300 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.481783 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.481998 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.483699 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.487228 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6z8gd"] Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.527723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.528007 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.528122 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.528231 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.629931 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.630301 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.630356 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.630391 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.636715 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.636781 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.641046 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.651931 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"ssh-known-hosts-edpm-deployment-6z8gd\" (UID: 
\"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:23 crc kubenswrapper[4949]: I0120 15:28:23.805563 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:24 crc kubenswrapper[4949]: I0120 15:28:24.345894 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6z8gd"] Jan 20 15:28:24 crc kubenswrapper[4949]: I0120 15:28:24.383801 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerStarted","Data":"974eb2867a71ee108a5247674defa3d98821634338938164dfe279741f7a9a70"} Jan 20 15:28:25 crc kubenswrapper[4949]: I0120 15:28:25.394315 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerStarted","Data":"36cdec4733e97efed0669b939674294e794d961eda5e6f7eafee57684a0680f7"} Jan 20 15:28:25 crc kubenswrapper[4949]: I0120 15:28:25.411262 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" podStartSLOduration=1.890981735 podStartE2EDuration="2.411248408s" podCreationTimestamp="2026-01-20 15:28:23 +0000 UTC" firstStartedPulling="2026-01-20 15:28:24.350107932 +0000 UTC m=+2300.159938790" lastFinishedPulling="2026-01-20 15:28:24.870374565 +0000 UTC m=+2300.680205463" observedRunningTime="2026-01-20 15:28:25.408346631 +0000 UTC m=+2301.218177489" watchObservedRunningTime="2026-01-20 15:28:25.411248408 +0000 UTC m=+2301.221079266" Jan 20 15:28:27 crc kubenswrapper[4949]: I0120 15:28:27.789430 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:27 crc kubenswrapper[4949]: E0120 15:28:27.790157 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.378148 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.382680 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.416866 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.466118 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.466311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.466347 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.567567 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.567706 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.567730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.568335 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.568510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.590027 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"certified-operators-8w6nr\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:29 crc kubenswrapper[4949]: I0120 15:28:29.704765 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:30 crc kubenswrapper[4949]: I0120 15:28:30.312659 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:30 crc kubenswrapper[4949]: I0120 15:28:30.453127 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerStarted","Data":"653caeb758c7ac046112bc5240ea531cc0b62948a13a3b02070006b005805b4c"} Jan 20 15:28:31 crc kubenswrapper[4949]: I0120 15:28:31.462050 4949 generic.go:334] "Generic (PLEG): container finished" podID="a0beddb2-34aa-4859-b114-03e9876f9722" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa" exitCode=0 Jan 20 15:28:31 crc kubenswrapper[4949]: I0120 15:28:31.462115 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa"} Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.474116 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerStarted","Data":"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"} Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.930249 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.931952 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:32 crc kubenswrapper[4949]: I0120 15:28:32.958286 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.040196 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.040373 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.040484 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.142766 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.142918 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.142987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.143686 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.143755 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.169902 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"redhat-operators-fltdx\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.259060 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.484571 4949 generic.go:334] "Generic (PLEG): container finished" podID="a0beddb2-34aa-4859-b114-03e9876f9722" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43" exitCode=0 Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.484602 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"} Jan 20 15:28:33 crc kubenswrapper[4949]: I0120 15:28:33.776486 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:33 crc kubenswrapper[4949]: W0120 15:28:33.783366 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06d1d4bb_9d56_4ee1_8afb_bedaedd08a16.slice/crio-aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae WatchSource:0}: Error finding container aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae: Status 404 returned error can't find the container with id aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae Jan 20 15:28:34 crc kubenswrapper[4949]: I0120 15:28:34.503350 4949 generic.go:334] "Generic (PLEG): container finished" podID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" exitCode=0 Jan 20 15:28:34 crc kubenswrapper[4949]: I0120 15:28:34.503422 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085"} Jan 20 15:28:34 crc kubenswrapper[4949]: I0120 15:28:34.503703 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerStarted","Data":"aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae"} Jan 20 15:28:35 crc kubenswrapper[4949]: I0120 15:28:35.517306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerStarted","Data":"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31"} Jan 20 15:28:35 crc kubenswrapper[4949]: I0120 15:28:35.519791 4949 generic.go:334] "Generic (PLEG): container finished" podID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerID="36cdec4733e97efed0669b939674294e794d961eda5e6f7eafee57684a0680f7" exitCode=0 Jan 20 15:28:35 crc kubenswrapper[4949]: I0120 15:28:35.519857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerDied","Data":"36cdec4733e97efed0669b939674294e794d961eda5e6f7eafee57684a0680f7"} Jan 20 15:28:35 crc 
kubenswrapper[4949]: I0120 15:28:35.546681 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8w6nr" podStartSLOduration=3.723345026 podStartE2EDuration="6.546654874s" podCreationTimestamp="2026-01-20 15:28:29 +0000 UTC" firstStartedPulling="2026-01-20 15:28:31.465997505 +0000 UTC m=+2307.275828363" lastFinishedPulling="2026-01-20 15:28:34.289307353 +0000 UTC m=+2310.099138211" observedRunningTime="2026-01-20 15:28:35.543617973 +0000 UTC m=+2311.353448891" watchObservedRunningTime="2026-01-20 15:28:35.546654874 +0000 UTC m=+2311.356485752" Jan 20 15:28:36 crc kubenswrapper[4949]: I0120 15:28:36.534234 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerStarted","Data":"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4"} Jan 20 15:28:36 crc kubenswrapper[4949]: I0120 15:28:36.923126 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016214 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016310 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.016380 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") pod \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\" (UID: \"53b63ff2-c70c-4429-99c6-759d0eb33ae9\") " Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.021762 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph" (OuterVolumeSpecName: "ceph") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.021910 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk" (OuterVolumeSpecName: "kube-api-access-xrmhk") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "kube-api-access-xrmhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.045483 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.049692 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "53b63ff2-c70c-4429-99c6-759d0eb33ae9" (UID: "53b63ff2-c70c-4429-99c6-759d0eb33ae9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117642 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117682 4949 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117701 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrmhk\" (UniqueName: \"kubernetes.io/projected/53b63ff2-c70c-4429-99c6-759d0eb33ae9-kube-api-access-xrmhk\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.117714 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53b63ff2-c70c-4429-99c6-759d0eb33ae9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.542344 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" event={"ID":"53b63ff2-c70c-4429-99c6-759d0eb33ae9","Type":"ContainerDied","Data":"974eb2867a71ee108a5247674defa3d98821634338938164dfe279741f7a9a70"} Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.542395 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974eb2867a71ee108a5247674defa3d98821634338938164dfe279741f7a9a70" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.542427 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6z8gd" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.675447 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"] Jan 20 15:28:37 crc kubenswrapper[4949]: E0120 15:28:37.675877 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.675893 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.676055 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b63ff2-c70c-4429-99c6-759d0eb33ae9" containerName="ssh-known-hosts-edpm-deployment" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.676708 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.679596 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.679854 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.694926 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.694933 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.695116 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.695245 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"] Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.742265 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.742335 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.742392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 
15:28:37.742432 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.844091 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.844191 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.844963 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.845062 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.849593 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.851904 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.860923 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:37 crc kubenswrapper[4949]: I0120 15:28:37.864446 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crq4\" 
(UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bdp7d\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.003626 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.398325 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d"] Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.407644 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.550128 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerStarted","Data":"932cd2d103f15668ad771486432ec076817ee64314c5dd52315bdac5cf51d072"} Jan 20 15:28:38 crc kubenswrapper[4949]: I0120 15:28:38.789409 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:38 crc kubenswrapper[4949]: E0120 15:28:38.789999 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.566204 4949 generic.go:334] "Generic (PLEG): container finished" podID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" exitCode=0 Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.566306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4"} Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.705430 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.705508 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:39 crc kubenswrapper[4949]: I0120 15:28:39.757771 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:40 crc kubenswrapper[4949]: I0120 15:28:40.626089 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:40 crc kubenswrapper[4949]: I0120 15:28:40.921639 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.586792 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" 
event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerStarted","Data":"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1"} Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.588442 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerStarted","Data":"157d212863df21d0aa58c275f8ef17f4d8b9442b2f3e882fbaf57d05388f3ce4"} Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.611494 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fltdx" podStartSLOduration=3.141466698 podStartE2EDuration="9.611454684s" podCreationTimestamp="2026-01-20 15:28:32 +0000 UTC" firstStartedPulling="2026-01-20 15:28:34.504945792 +0000 UTC m=+2310.314776660" lastFinishedPulling="2026-01-20 15:28:40.974933788 +0000 UTC m=+2316.784764646" observedRunningTime="2026-01-20 15:28:41.605186905 +0000 UTC m=+2317.415017773" watchObservedRunningTime="2026-01-20 15:28:41.611454684 +0000 UTC m=+2317.421285542" Jan 20 15:28:41 crc kubenswrapper[4949]: I0120 15:28:41.627583 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" podStartSLOduration=2.234628803 podStartE2EDuration="4.627564298s" podCreationTimestamp="2026-01-20 15:28:37 +0000 UTC" firstStartedPulling="2026-01-20 15:28:38.407339104 +0000 UTC m=+2314.217169962" lastFinishedPulling="2026-01-20 15:28:40.800274559 +0000 UTC m=+2316.610105457" observedRunningTime="2026-01-20 15:28:41.62232351 +0000 UTC m=+2317.432154368" watchObservedRunningTime="2026-01-20 15:28:41.627564298 +0000 UTC m=+2317.437395156" Jan 20 15:28:42 crc kubenswrapper[4949]: I0120 15:28:42.595836 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8w6nr" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" containerID="cri-o://db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" gracePeriod=2 Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.065874 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159076 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") pod \"a0beddb2-34aa-4859-b114-03e9876f9722\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159275 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") pod \"a0beddb2-34aa-4859-b114-03e9876f9722\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159356 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") pod \"a0beddb2-34aa-4859-b114-03e9876f9722\" (UID: \"a0beddb2-34aa-4859-b114-03e9876f9722\") " Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.159836 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities" (OuterVolumeSpecName: "utilities") pod "a0beddb2-34aa-4859-b114-03e9876f9722" (UID: "a0beddb2-34aa-4859-b114-03e9876f9722"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.165268 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd" (OuterVolumeSpecName: "kube-api-access-nzhxd") pod "a0beddb2-34aa-4859-b114-03e9876f9722" (UID: "a0beddb2-34aa-4859-b114-03e9876f9722"). InnerVolumeSpecName "kube-api-access-nzhxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.206027 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0beddb2-34aa-4859-b114-03e9876f9722" (UID: "a0beddb2-34aa-4859-b114-03e9876f9722"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.259915 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.260670 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.261198 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.261215 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0beddb2-34aa-4859-b114-03e9876f9722-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.261225 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzhxd\" (UniqueName: \"kubernetes.io/projected/a0beddb2-34aa-4859-b114-03e9876f9722-kube-api-access-nzhxd\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605599 4949 generic.go:334] "Generic (PLEG): container finished" podID="a0beddb2-34aa-4859-b114-03e9876f9722" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" exitCode=0 Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605659 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w6nr" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605703 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31"} Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w6nr" event={"ID":"a0beddb2-34aa-4859-b114-03e9876f9722","Type":"ContainerDied","Data":"653caeb758c7ac046112bc5240ea531cc0b62948a13a3b02070006b005805b4c"} Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.605792 4949 scope.go:117] "RemoveContainer" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.651094 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.660040 4949 scope.go:117] "RemoveContainer" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.662048 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8w6nr"] Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.690852 4949 scope.go:117] "RemoveContainer" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa" Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.735262 4949 scope.go:117] "RemoveContainer" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31" Jan 20 15:28:43 crc kubenswrapper[4949]: E0120 15:28:43.735803 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31\": container with ID starting with db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31 not found: ID does not exist" containerID="db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31"
Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.735833 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31"} err="failed to get container status \"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31\": rpc error: code = NotFound desc = could not find container \"db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31\": container with ID starting with db8f698b0d5fbd0613073bf2fd36d0088bb3a2687671724814fd39e9cc205b31 not found: ID does not exist"
Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.735853 4949 scope.go:117] "RemoveContainer" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"
Jan 20 15:28:43 crc kubenswrapper[4949]: E0120 15:28:43.736219 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43\": container with ID starting with 035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43 not found: ID does not exist" containerID="035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"
Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.736341 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43"} err="failed to get container status \"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43\": rpc error: code = NotFound desc = could not find container \"035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43\": container with ID starting with 035f5d17e8185775e16be311a07c614f6423a8a5a4f785d79fc33bb1053c4a43 not found: ID does not exist"
Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.736480 4949 scope.go:117] "RemoveContainer" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa"
Jan 20 15:28:43 crc kubenswrapper[4949]: E0120 15:28:43.736992 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa\": container with ID starting with 1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa not found: ID does not exist" containerID="1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa"
Jan 20 15:28:43 crc kubenswrapper[4949]: I0120 15:28:43.737022 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa"} err="failed to get container status \"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa\": rpc error: code = NotFound desc = could not find container \"1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa\": container with ID starting with 1e461e34e39f78061b7954826f5eb68b4febaf428f50327452bc3d96176d71fa not found: ID does not exist"
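The three error pairs above come from a second RemoveContainer pass over certified-operators-8w6nr containers that the first cleanup pass had already deleted, so CRI-O answers with gRPC NotFound; kubelet logs the error and carries on, since the end state ("container gone") already holds. Deletion against a CRI runtime is effectively idempotent when NotFound is treated as success, roughly like this (illustrative pattern, not kubelet's code):

    // remove.go - treating gRPC NotFound as "already deleted" (illustrative).
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer deletes by ID and swallows NotFound: the desired end
    // state (no such container) already holds, so there is nothing to retry.
    func removeContainer(id string, remove func(string) error) error {
    	err := remove(id)
    	if status.Code(err) == codes.NotFound {
    		fmt.Printf("container %q already gone; treating delete as done\n", id)
    		return nil
    	}
    	return err // nil on success, real errors propagate
    }

    func main() {
    	// Stand-in for a CRI RemoveContainer call against an already-purged ID.
    	gone := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	if err := removeContainer("db8f698b0d5f", gone); err != nil {
    		panic(err)
    	}
    }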
podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" probeResult="failure" output=< Jan 20 15:28:44 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 15:28:44 crc kubenswrapper[4949]: > Jan 20 15:28:44 crc kubenswrapper[4949]: I0120 15:28:44.801924 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" path="/var/lib/kubelet/pods/a0beddb2-34aa-4859-b114-03e9876f9722/volumes" Jan 20 15:28:48 crc kubenswrapper[4949]: I0120 15:28:48.650941 4949 generic.go:334] "Generic (PLEG): container finished" podID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerID="157d212863df21d0aa58c275f8ef17f4d8b9442b2f3e882fbaf57d05388f3ce4" exitCode=0 Jan 20 15:28:48 crc kubenswrapper[4949]: I0120 15:28:48.651041 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerDied","Data":"157d212863df21d0aa58c275f8ef17f4d8b9442b2f3e882fbaf57d05388f3ce4"} Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.070239 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.233043 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.233085 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.233138 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.233337 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") pod \"4d06892f-967c-4bd9-ac54-c36c80e3df73\" (UID: \"4d06892f-967c-4bd9-ac54-c36c80e3df73\") " Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.239496 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4" (OuterVolumeSpecName: "kube-api-access-9crq4") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "kube-api-access-9crq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.241231 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph" (OuterVolumeSpecName: "ceph") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.264380 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.269204 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory" (OuterVolumeSpecName: "inventory") pod "4d06892f-967c-4bd9-ac54-c36c80e3df73" (UID: "4d06892f-967c-4bd9-ac54-c36c80e3df73"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335827 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335885 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335965 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d06892f-967c-4bd9-ac54-c36c80e3df73-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.335978 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crq4\" (UniqueName: \"kubernetes.io/projected/4d06892f-967c-4bd9-ac54-c36c80e3df73-kube-api-access-9crq4\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.670155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" event={"ID":"4d06892f-967c-4bd9-ac54-c36c80e3df73","Type":"ContainerDied","Data":"932cd2d103f15668ad771486432ec076817ee64314c5dd52315bdac5cf51d072"} Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.670499 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="932cd2d103f15668ad771486432ec076817ee64314c5dd52315bdac5cf51d072" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.670260 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bdp7d" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.794306 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.794552 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.829606 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5"] Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830180 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830206 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830221 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="extract-utilities" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830231 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="extract-utilities" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830266 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="extract-content" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830276 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="extract-content" Jan 20 15:28:50 crc kubenswrapper[4949]: E0120 15:28:50.830299 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830308 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830546 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d06892f-967c-4bd9-ac54-c36c80e3df73" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.830595 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0beddb2-34aa-4859-b114-03e9876f9722" containerName="registry-server" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.831425 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834391 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834671 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5"] Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834732 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.834869 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.836486 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.836834 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957284 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957668 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:50 crc kubenswrapper[4949]: I0120 15:28:50.957742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.059870 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.059928 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.059999 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.060019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.064510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.064655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.066242 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.076044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.155029 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:28:51 crc kubenswrapper[4949]: I0120 15:28:51.698978 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5"] Jan 20 15:28:52 crc kubenswrapper[4949]: I0120 15:28:52.693252 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerStarted","Data":"2bb7da0c357aafa5653293ede63d63a9faed2b781c862a009a44845623b5b8a0"} Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.340050 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.390680 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.586488 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.704269 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerStarted","Data":"06f7ca42c9bf56fefceddad7c492908989c919da063ce926c55d53dece166f11"} Jan 20 15:28:53 crc kubenswrapper[4949]: I0120 15:28:53.727891 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" podStartSLOduration=2.870842414 podStartE2EDuration="3.727874527s" podCreationTimestamp="2026-01-20 15:28:50 +0000 UTC" firstStartedPulling="2026-01-20 15:28:51.705123497 +0000 UTC m=+2327.514954355" lastFinishedPulling="2026-01-20 15:28:52.56215561 +0000 UTC m=+2328.371986468" observedRunningTime="2026-01-20 15:28:53.723843837 +0000 UTC m=+2329.533674735" watchObservedRunningTime="2026-01-20 15:28:53.727874527 +0000 UTC m=+2329.537705385" Jan 20 15:28:54 crc kubenswrapper[4949]: I0120 15:28:54.711508 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fltdx" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" containerID="cri-o://87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" gracePeriod=2 Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.302012 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.432973 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") pod \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.433136 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") pod \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.433221 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") pod \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\" (UID: \"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16\") " Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.433886 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities" (OuterVolumeSpecName: "utilities") pod "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" (UID: "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.440816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5" (OuterVolumeSpecName: "kube-api-access-5qsb5") pod "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" (UID: "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16"). InnerVolumeSpecName "kube-api-access-5qsb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.537420 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.537456 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qsb5\" (UniqueName: \"kubernetes.io/projected/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-kube-api-access-5qsb5\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.556489 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" (UID: "06d1d4bb-9d56-4ee1-8afb-bedaedd08a16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.639756 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719772 4949 generic.go:334] "Generic (PLEG): container finished" podID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" exitCode=0 Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719813 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1"} Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719840 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fltdx" event={"ID":"06d1d4bb-9d56-4ee1-8afb-bedaedd08a16","Type":"ContainerDied","Data":"aea8b924f1ead7844d8d8ecffdefeccfe2139f5e1dd98ef99c336a3edb0451ae"} Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719857 4949 scope.go:117] "RemoveContainer" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.719854 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fltdx" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.745458 4949 scope.go:117] "RemoveContainer" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.756856 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.768341 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fltdx"] Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.785917 4949 scope.go:117] "RemoveContainer" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.809935 4949 scope.go:117] "RemoveContainer" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" Jan 20 15:28:55 crc kubenswrapper[4949]: E0120 15:28:55.810460 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1\": container with ID starting with 87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1 not found: ID does not exist" containerID="87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.810498 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1"} err="failed to get container status \"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1\": rpc error: code = NotFound desc = could not find container \"87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1\": container with ID starting with 87efde926dd2a98479bc76a340b75652b6214cf917bc9cba48a56760e04a29f1 not found: ID does not exist" Jan 20 15:28:55 crc 
kubenswrapper[4949]: I0120 15:28:55.810536 4949 scope.go:117] "RemoveContainer" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" Jan 20 15:28:55 crc kubenswrapper[4949]: E0120 15:28:55.811092 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4\": container with ID starting with fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4 not found: ID does not exist" containerID="fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.811139 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4"} err="failed to get container status \"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4\": rpc error: code = NotFound desc = could not find container \"fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4\": container with ID starting with fd8a33d7755be2332ff55edbf3b10080779de91c1b2bccd8231d5ba0875ebcc4 not found: ID does not exist" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.811170 4949 scope.go:117] "RemoveContainer" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" Jan 20 15:28:55 crc kubenswrapper[4949]: E0120 15:28:55.811567 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085\": container with ID starting with 5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085 not found: ID does not exist" containerID="5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085" Jan 20 15:28:55 crc kubenswrapper[4949]: I0120 15:28:55.811598 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085"} err="failed to get container status \"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085\": rpc error: code = NotFound desc = could not find container \"5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085\": container with ID starting with 5af056b49b5684d3b1c45f27f68a6600b8a78836ab9466654acd30193dc1d085 not found: ID does not exist" Jan 20 15:28:56 crc kubenswrapper[4949]: I0120 15:28:56.798763 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" path="/var/lib/kubelet/pods/06d1d4bb-9d56-4ee1-8afb-bedaedd08a16/volumes" Jan 20 15:29:01 crc kubenswrapper[4949]: I0120 15:29:01.788692 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:29:01 crc kubenswrapper[4949]: E0120 15:29:01.789629 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:29:02 crc kubenswrapper[4949]: I0120 15:29:02.796954 4949 generic.go:334] "Generic (PLEG): container finished" podID="3b31ae29-db74-4104-b8b5-377bfa3f766a" 
containerID="06f7ca42c9bf56fefceddad7c492908989c919da063ce926c55d53dece166f11" exitCode=0 Jan 20 15:29:02 crc kubenswrapper[4949]: I0120 15:29:02.809443 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerDied","Data":"06f7ca42c9bf56fefceddad7c492908989c919da063ce926c55d53dece166f11"} Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.240873 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313207 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313330 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313508 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.313542 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") pod \"3b31ae29-db74-4104-b8b5-377bfa3f766a\" (UID: \"3b31ae29-db74-4104-b8b5-377bfa3f766a\") " Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.321368 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn" (OuterVolumeSpecName: "kube-api-access-2gvbn") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "kube-api-access-2gvbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.323852 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph" (OuterVolumeSpecName: "ceph") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.346856 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.354161 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory" (OuterVolumeSpecName: "inventory") pod "3b31ae29-db74-4104-b8b5-377bfa3f766a" (UID: "3b31ae29-db74-4104-b8b5-377bfa3f766a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416100 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416149 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416167 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b31ae29-db74-4104-b8b5-377bfa3f766a-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.416178 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gvbn\" (UniqueName: \"kubernetes.io/projected/3b31ae29-db74-4104-b8b5-377bfa3f766a-kube-api-access-2gvbn\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.823939 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" event={"ID":"3b31ae29-db74-4104-b8b5-377bfa3f766a","Type":"ContainerDied","Data":"2bb7da0c357aafa5653293ede63d63a9faed2b781c862a009a44845623b5b8a0"} Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.824209 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bb7da0c357aafa5653293ede63d63a9faed2b781c862a009a44845623b5b8a0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.824082 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.922271 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"] Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.922880 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b31ae29-db74-4104-b8b5-377bfa3f766a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.922950 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b31ae29-db74-4104-b8b5-377bfa3f766a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.923031 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-utilities" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923090 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-utilities" Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.923149 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-content" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923199 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="extract-content" Jan 20 15:29:04 crc kubenswrapper[4949]: E0120 15:29:04.923266 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923336 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923595 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b31ae29-db74-4104-b8b5-377bfa3f766a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.923673 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d1d4bb-9d56-4ee1-8afb-bedaedd08a16" containerName="registry-server" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.924342 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927380 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927378 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927723 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927739 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927844 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927909 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.927969 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.928443 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 20 15:29:04 crc kubenswrapper[4949]: I0120 15:29:04.933351 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"] Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.024896 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.024976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025148 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025251 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.025943 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026068 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026169 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026348 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026417 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.026598 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.127721 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128128 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.128896 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.129146 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.129218 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.129346 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.130262 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131161 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131454 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.131612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.132128 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.134382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.134755 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.135391 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.135877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.136583 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.137048 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.137403 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.137410 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.138347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.139993 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.143870 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.150089 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.240080 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:05 crc kubenswrapper[4949]: I0120 15:29:05.824789 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"] Jan 20 15:29:06 crc kubenswrapper[4949]: I0120 15:29:06.847968 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerStarted","Data":"2552adbd9467a9c5b9482358b1855362c05f66b637e8f2f5b2886982d235b0b5"} Jan 20 15:29:07 crc kubenswrapper[4949]: I0120 15:29:07.857205 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerStarted","Data":"d83d45895b8ab8e4573ba4fee2d94606703a13c2a6437f78e9cc3391c8ec8859"} Jan 20 15:29:07 crc kubenswrapper[4949]: I0120 15:29:07.878839 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" podStartSLOduration=2.828716324 podStartE2EDuration="3.87882253s" podCreationTimestamp="2026-01-20 15:29:04 +0000 UTC" firstStartedPulling="2026-01-20 15:29:05.841919418 +0000 UTC m=+2341.651750276" lastFinishedPulling="2026-01-20 15:29:06.892025624 +0000 UTC m=+2342.701856482" observedRunningTime="2026-01-20 15:29:07.875553448 +0000 UTC m=+2343.685384316" watchObservedRunningTime="2026-01-20 15:29:07.87882253 +0000 UTC m=+2343.688653388" Jan 20 15:29:12 crc kubenswrapper[4949]: I0120 15:29:12.789755 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:29:12 crc kubenswrapper[4949]: E0120 15:29:12.790669 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:29:25 crc kubenswrapper[4949]: I0120 15:29:25.789692 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:29:25 crc kubenswrapper[4949]: E0120 15:29:25.790633 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:29:37 crc kubenswrapper[4949]: I0120 15:29:37.789133 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:29:37 crc kubenswrapper[4949]: E0120 15:29:37.789867 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:29:42 crc kubenswrapper[4949]: I0120 15:29:42.176779 4949 generic.go:334] "Generic (PLEG): container finished" podID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerID="d83d45895b8ab8e4573ba4fee2d94606703a13c2a6437f78e9cc3391c8ec8859" exitCode=0
Jan 20 15:29:42 crc kubenswrapper[4949]: I0120 15:29:42.176856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerDied","Data":"d83d45895b8ab8e4573ba4fee2d94606703a13c2a6437f78e9cc3391c8ec8859"}
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.629358 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb"
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690461 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690573 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690638 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690676 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690759 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690782 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690842 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.690897 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.691880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.691948 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.691975 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.692061 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.692143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\" (UID: \"d1ff69ad-f42e-4882-a580-c2fc212ab3a4\") "
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.700323 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.700406 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms" (OuterVolumeSpecName: "kube-api-access-9vbms") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "kube-api-access-9vbms".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701322 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701396 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701541 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.701805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.702142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph" (OuterVolumeSpecName: "ceph") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.702668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.703564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.703675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.704848 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.730842 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.736805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory" (OuterVolumeSpecName: "inventory") pod "d1ff69ad-f42e-4882-a580-c2fc212ab3a4" (UID: "d1ff69ad-f42e-4882-a580-c2fc212ab3a4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795149 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795209 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795232 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795252 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795295 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795314 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795333 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795353 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795373 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795391 4949 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795409 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vbms\" (UniqueName: \"kubernetes.io/projected/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-kube-api-access-9vbms\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795426 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-ovn-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 20 15:29:43 crc kubenswrapper[4949]: I0120 15:29:43.795445 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ff69ad-f42e-4882-a580-c2fc212ab3a4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.198901 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" event={"ID":"d1ff69ad-f42e-4882-a580-c2fc212ab3a4","Type":"ContainerDied","Data":"2552adbd9467a9c5b9482358b1855362c05f66b637e8f2f5b2886982d235b0b5"} Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.198951 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2552adbd9467a9c5b9482358b1855362c05f66b637e8f2f5b2886982d235b0b5" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.198995 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.303227 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"] Jan 20 15:29:44 crc kubenswrapper[4949]: E0120 15:29:44.303922 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.303964 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.304314 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ff69ad-f42e-4882-a580-c2fc212ab3a4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.305177 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.309860 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.310241 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.310899 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.311441 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.313956 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.321847 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"] Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.407903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.408021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.408067 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.408094 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510318 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510353 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.510375 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.514957 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.515128 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.516480 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.530054 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-tp625\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.621394 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:44 crc kubenswrapper[4949]: I0120 15:29:44.971923 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625"] Jan 20 15:29:45 crc kubenswrapper[4949]: I0120 15:29:45.210862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerStarted","Data":"1375055bad8df9d88a242faa9e275ca7ec0737f787a1baa1b269db756c3652cb"} Jan 20 15:29:46 crc kubenswrapper[4949]: I0120 15:29:46.221477 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerStarted","Data":"c174525433185192139fe513e1549a2527a5a51df56fbddd77ff87d5abc98489"} Jan 20 15:29:46 crc kubenswrapper[4949]: I0120 15:29:46.251893 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" podStartSLOduration=1.630685105 podStartE2EDuration="2.251859772s" podCreationTimestamp="2026-01-20 15:29:44 +0000 UTC" firstStartedPulling="2026-01-20 15:29:44.98407967 +0000 UTC m=+2380.793910528" lastFinishedPulling="2026-01-20 15:29:45.605254317 +0000 UTC m=+2381.415085195" observedRunningTime="2026-01-20 15:29:46.241240624 +0000 UTC m=+2382.051071502" watchObservedRunningTime="2026-01-20 15:29:46.251859772 +0000 UTC m=+2382.061690630" Jan 20 15:29:50 crc kubenswrapper[4949]: I0120 15:29:50.790138 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:29:50 crc kubenswrapper[4949]: E0120 15:29:50.790844 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:29:51 crc kubenswrapper[4949]: I0120 15:29:51.266863 4949 generic.go:334] "Generic (PLEG): container finished" podID="70d9d029-15fb-479a-b668-926d3167b179" containerID="c174525433185192139fe513e1549a2527a5a51df56fbddd77ff87d5abc98489" exitCode=0 Jan 20 15:29:51 crc kubenswrapper[4949]: I0120 15:29:51.266962 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerDied","Data":"c174525433185192139fe513e1549a2527a5a51df56fbddd77ff87d5abc98489"} Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.671470 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775483 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775676 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.775746 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") pod \"70d9d029-15fb-479a-b668-926d3167b179\" (UID: \"70d9d029-15fb-479a-b668-926d3167b179\") " Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.792267 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r" (OuterVolumeSpecName: "kube-api-access-ldn5r") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "kube-api-access-ldn5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.792490 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph" (OuterVolumeSpecName: "ceph") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.804070 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory" (OuterVolumeSpecName: "inventory") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.811580 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70d9d029-15fb-479a-b668-926d3167b179" (UID: "70d9d029-15fb-479a-b668-926d3167b179"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878018 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878064 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldn5r\" (UniqueName: \"kubernetes.io/projected/70d9d029-15fb-479a-b668-926d3167b179-kube-api-access-ldn5r\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878078 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:52 crc kubenswrapper[4949]: I0120 15:29:52.878089 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70d9d029-15fb-479a-b668-926d3167b179-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.295859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" event={"ID":"70d9d029-15fb-479a-b668-926d3167b179","Type":"ContainerDied","Data":"1375055bad8df9d88a242faa9e275ca7ec0737f787a1baa1b269db756c3652cb"} Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.295899 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-tp625" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.295919 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1375055bad8df9d88a242faa9e275ca7ec0737f787a1baa1b269db756c3652cb" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.444156 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"] Jan 20 15:29:53 crc kubenswrapper[4949]: E0120 15:29:53.444836 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d9d029-15fb-479a-b668-926d3167b179" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.444852 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d9d029-15fb-479a-b668-926d3167b179" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.445030 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d9d029-15fb-479a-b668-926d3167b179" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.445585 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.447711 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.447927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.447989 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.448708 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.458409 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.460256 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.482644 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"] Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591319 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591375 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591418 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591445 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.591605 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 
15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.592011 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693295 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693450 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693490 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693568 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693603 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.693640 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.696088 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.699561 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.699991 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.700213 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.701937 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.711013 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7j58g\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:29:53 crc kubenswrapper[4949]: I0120 15:29:53.781625 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"
Jan 20 15:29:54 crc kubenswrapper[4949]: I0120 15:29:54.287669 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"]
Jan 20 15:29:54 crc kubenswrapper[4949]: I0120 15:29:54.313154 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerStarted","Data":"9536fad1bacc9034cf76c961a09438499cdeeda973474821cb5591e72cc72834"}
Jan 20 15:29:56 crc kubenswrapper[4949]: I0120 15:29:56.329891 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerStarted","Data":"3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787"}
Jan 20 15:29:56 crc kubenswrapper[4949]: I0120 15:29:56.356465 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" podStartSLOduration=1.999779437 podStartE2EDuration="3.356446391s" podCreationTimestamp="2026-01-20 15:29:53 +0000 UTC" firstStartedPulling="2026-01-20 15:29:54.294666031 +0000 UTC m=+2390.104496889" lastFinishedPulling="2026-01-20 15:29:55.651332945 +0000 UTC m=+2391.461163843" observedRunningTime="2026-01-20 15:29:56.350231299 +0000 UTC m=+2392.160062157" watchObservedRunningTime="2026-01-20 15:29:56.356446391 +0000 UTC m=+2392.166277249"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.140312 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"]
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.142118 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"
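collect-profiles is the OLM profile-collection CronJob. By the upstream CronJob controller's naming convention (background knowledge, not something this log states), the numeric suffix of a CronJob-created Job is its scheduled run time in minutes since the Unix epoch, so the name can be decoded directly; the trailing -knqrf is just the pod's random suffix and is dropped here:

    from datetime import datetime, timezone

    def decode_cronjob_suffix(job_name: str) -> datetime:
        # CronJob-created Jobs are named <cronjob>-<scheduled-minutes-since-epoch>.
        minutes = int(job_name.rsplit("-", 1)[-1])
        return datetime.fromtimestamp(minutes * 60, tz=timezone.utc)

    print(decode_cronjob_suffix("collect-profiles-29482050"))  # 2026-01-20 15:30:00+00:00, the SyncLoop ADD time above
    print(decode_cronjob_suffix("collect-profiles-29482005"))  # 2026-01-20 14:45:00+00:00, the older run pruned below

The 45-minute gap between the two suffixes is consistent with a 15-minute schedule keeping three successful runs: once this run completes, the fourth-newest job (collect-profiles-29482005-wzsk7) is removed, which matches the SyncLoop DELETE/REMOVE pair logged at 15:30:03 further down.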
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.144723 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.146806 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.160932 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"]
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.323044 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.323145 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.323176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.425819 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.425929 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.425965 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"
Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.427455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod
\"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.435127 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.445267 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"collect-profiles-29482050-knqrf\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.480706 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:00 crc kubenswrapper[4949]: I0120 15:30:00.917415 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf"] Jan 20 15:30:01 crc kubenswrapper[4949]: I0120 15:30:01.374366 4949 generic.go:334] "Generic (PLEG): container finished" podID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerID="88f113a7b6fc797eb7fe514267bced55439c2fafac70f54e2c53c63c02e7a5c5" exitCode=0 Jan 20 15:30:01 crc kubenswrapper[4949]: I0120 15:30:01.374408 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" event={"ID":"a4f7e2a1-deca-4c82-928a-bab4bd7d6620","Type":"ContainerDied","Data":"88f113a7b6fc797eb7fe514267bced55439c2fafac70f54e2c53c63c02e7a5c5"} Jan 20 15:30:01 crc kubenswrapper[4949]: I0120 15:30:01.374457 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" event={"ID":"a4f7e2a1-deca-4c82-928a-bab4bd7d6620","Type":"ContainerStarted","Data":"c6737d5d40de6f9890a4c013ae19fa94b25faa296b568ba4a82dd3dc2c9ab5a0"} Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.733237 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.874420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") pod \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.874488 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") pod \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.874658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") pod \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\" (UID: \"a4f7e2a1-deca-4c82-928a-bab4bd7d6620\") " Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.875347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4f7e2a1-deca-4c82-928a-bab4bd7d6620" (UID: "a4f7e2a1-deca-4c82-928a-bab4bd7d6620"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.875623 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.880695 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4f7e2a1-deca-4c82-928a-bab4bd7d6620" (UID: "a4f7e2a1-deca-4c82-928a-bab4bd7d6620"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.880775 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2" (OuterVolumeSpecName: "kube-api-access-df6b2") pod "a4f7e2a1-deca-4c82-928a-bab4bd7d6620" (UID: "a4f7e2a1-deca-4c82-928a-bab4bd7d6620"). InnerVolumeSpecName "kube-api-access-df6b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.977161 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:30:02 crc kubenswrapper[4949]: I0120 15:30:02.977214 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df6b2\" (UniqueName: \"kubernetes.io/projected/a4f7e2a1-deca-4c82-928a-bab4bd7d6620-kube-api-access-df6b2\") on node \"crc\" DevicePath \"\"" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.392573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" event={"ID":"a4f7e2a1-deca-4c82-928a-bab4bd7d6620","Type":"ContainerDied","Data":"c6737d5d40de6f9890a4c013ae19fa94b25faa296b568ba4a82dd3dc2c9ab5a0"} Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.392900 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6737d5d40de6f9890a4c013ae19fa94b25faa296b568ba4a82dd3dc2c9ab5a0" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.392648 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482050-knqrf" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.790582 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:30:03 crc kubenswrapper[4949]: E0120 15:30:03.790825 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.814095 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 15:30:03 crc kubenswrapper[4949]: I0120 15:30:03.822668 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482005-wzsk7"] Jan 20 15:30:04 crc kubenswrapper[4949]: I0120 15:30:04.800442 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c06ab34-4b4e-4047-b32d-e9d36c792b1d" path="/var/lib/kubelet/pods/8c06ab34-4b4e-4047-b32d-e9d36c792b1d/volumes" Jan 20 15:30:15 crc kubenswrapper[4949]: I0120 15:30:15.788910 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:30:15 crc kubenswrapper[4949]: E0120 15:30:15.789751 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:30:18 crc kubenswrapper[4949]: I0120 15:30:18.298455 4949 scope.go:117] "RemoveContainer" containerID="f7ccf61b1b533eee3af51392be86e3fc038d228c29c868fd9df44638391dd3bf" Jan 20 
15:30:28 crc kubenswrapper[4949]: I0120 15:30:28.789475 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:30:28 crc kubenswrapper[4949]: E0120 15:30:28.790253 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:30:42 crc kubenswrapper[4949]: I0120 15:30:42.789675 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:30:42 crc kubenswrapper[4949]: E0120 15:30:42.790488 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:30:56 crc kubenswrapper[4949]: I0120 15:30:56.789743 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:30:56 crc kubenswrapper[4949]: E0120 15:30:56.790631 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:31:11 crc kubenswrapper[4949]: I0120 15:31:11.789353 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223"
Jan 20 15:31:11 crc kubenswrapper[4949]: E0120 15:31:11.790018 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e"
Jan 20 15:31:13 crc kubenswrapper[4949]: E0120 15:31:13.307565 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb1d8e10_2c84_4a8f_a3d0_653432297fb1.slice/crio-conmon-3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787.scope\": RecentStats: unable to find data in memory cache]"
Jan 20 15:31:14 crc kubenswrapper[4949]: I0120 15:31:14.040458 4949 generic.go:334] "Generic (PLEG): container finished" podID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerID="3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787" exitCode=0
Jan 20 15:31:14 crc kubenswrapper[4949]: I0120 15:31:14.040739 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerDied","Data":"3152a6154b59e0b2b5b0abe67975843fee7ee71b269eab67e69e5bcf08cc4787"}
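The RemoveContainer / "Error syncing pod" pairs for machine-config-daemon-kgqjd above recur through this whole capture at a steady 12-15 s rhythm (15:29:12, :25, :37, :50, 15:30:03, :15, :28, :42, :56, 15:31:11). That cadence reads as the kubelet's periodic pod sync re-queueing the pod and being turned away by the 5m0s crash-loop back-off each time: each pair is a skipped sync, not an actual container restart. A sketch that measures the rhythm, assuming a one-entry-per-line export such as journalctl -u kubelet > kubelet.log (hypothetical file name; the year is hardcoded because journal timestamps omit it):

    import re
    from datetime import datetime

    BACKOFF = re.compile(r'^(\w+ \d+ \d+:\d+:\d+) crc .*"Error syncing pod, skipping".*machine-config-daemon-kgqjd')

    times = []
    with open("kubelet.log") as f:   # hypothetical export of this journal
        for line in f:
            m = BACKOFF.match(line)
            if m:
                times.append(datetime.strptime("2026 " + m.group(1), "%Y %b %d %H:%M:%S"))

    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print(gaps)  # ~13 s apart in this capture: each hit is a refused sync while the back-off is in force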
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.421287 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g"
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553346 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") "
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") "
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553603 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") "
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553634 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") "
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553672 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") "
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.553708 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") pod \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\" (UID: \"eb1d8e10-2c84-4a8f-a3d0-653432297fb1\") "
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.560961 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.562162 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph" (OuterVolumeSpecName: "ceph") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
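The teardown unwinding here follows the volume reconciler's usual three-step pattern, visible for every volume of this pod: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), then "UnmountVolume.TearDown succeeded" (operation_generator.go:803), then "Volume detached" (reconciler_common.go:293) once the mount is gone. A rough pass that pairs the three phases per volume, assuming a one-entry-per-line export named kubelet.log (hypothetical; the regexes target the exact message shapes in this capture, where "started"/"detached" carry escaped quotes around the name and TearDown identifies it via OuterVolumeSpecName):

    import re
    from collections import defaultdict

    started  = re.compile(r'UnmountVolume started for volume \\"([^\\"]+)\\"')
    torndown = re.compile(r'UnmountVolume\.TearDown succeeded .*?\(OuterVolumeSpecName: "([^"]+)"\)')
    detached = re.compile(r'Volume detached for volume \\"([^\\"]+)\\"')

    phases = defaultdict(set)
    with open("kubelet.log") as f:   # hypothetical journalctl -u kubelet export
        for line in f:
            for phase, rx in (("started", started), ("torndown", torndown), ("detached", detached)):
                m = rx.search(line)
                if m:
                    phases[m.group(1)].add(phase)

    for vol, seen in sorted(phases.items()):
        if seen != {"started", "torndown", "detached"}:
            print(f"{vol}: teardown incomplete, saw only {sorted(seen)}")

For this pod all six volumes (ceph, ssh-key-openstack-edpm-ipam, kube-api-access-4mhrs, the ovncontroller-config-0 configmap, inventory, ovn-combined-ca-bundle) reach "Volume detached" below, so nothing would be reported.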
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.564429 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs" (OuterVolumeSpecName: "kube-api-access-4mhrs") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "kube-api-access-4mhrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.585856 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory" (OuterVolumeSpecName: "inventory") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.586449 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.596788 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eb1d8e10-2c84-4a8f-a3d0-653432297fb1" (UID: "eb1d8e10-2c84-4a8f-a3d0-653432297fb1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656165 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mhrs\" (UniqueName: \"kubernetes.io/projected/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-kube-api-access-4mhrs\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656207 4949 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656217 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656225 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656233 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:15 crc kubenswrapper[4949]: I0120 15:31:15.656242 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb1d8e10-2c84-4a8f-a3d0-653432297fb1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.062266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" event={"ID":"eb1d8e10-2c84-4a8f-a3d0-653432297fb1","Type":"ContainerDied","Data":"9536fad1bacc9034cf76c961a09438499cdeeda973474821cb5591e72cc72834"} Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.062328 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7j58g" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.062347 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9536fad1bacc9034cf76c961a09438499cdeeda973474821cb5591e72cc72834" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.161564 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf"] Jan 20 15:31:16 crc kubenswrapper[4949]: E0120 15:31:16.162112 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerName="collect-profiles" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162132 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerName="collect-profiles" Jan 20 15:31:16 crc kubenswrapper[4949]: E0120 15:31:16.162156 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162165 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162445 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1d8e10-2c84-4a8f-a3d0-653432297fb1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.162480 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f7e2a1-deca-4c82-928a-bab4bd7d6620" containerName="collect-profiles" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.163113 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.165925 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.166117 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.166209 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.167156 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.169494 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.170291 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.174393 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.188644 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf"] Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267470 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267552 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267584 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267610 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: 
I0120 15:31:16.267645 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267666 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.267710 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.368948 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369038 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369075 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369099 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369143 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.369194 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.374442 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.381548 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.381809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.382159 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.382267 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.383206 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.384754 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:16 crc kubenswrapper[4949]: I0120 15:31:16.481756 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:31:17 crc kubenswrapper[4949]: I0120 15:31:17.013538 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf"] Jan 20 15:31:17 crc kubenswrapper[4949]: I0120 15:31:17.074648 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerStarted","Data":"958a6ad4ddf1df7b835d62486cb75542552b1f8e543ba46f928986491ecb2fbf"} Jan 20 15:31:18 crc kubenswrapper[4949]: I0120 15:31:18.082628 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerStarted","Data":"9e6bd56465eba918050013087e291973ba2f51b53db86a1b50dea1710cedc6c7"} Jan 20 15:31:18 crc kubenswrapper[4949]: I0120 15:31:18.107568 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" podStartSLOduration=1.377738758 podStartE2EDuration="2.107547409s" podCreationTimestamp="2026-01-20 15:31:16 +0000 UTC" firstStartedPulling="2026-01-20 15:31:17.015016381 +0000 UTC m=+2472.824847239" lastFinishedPulling="2026-01-20 15:31:17.744825022 +0000 UTC m=+2473.554655890" observedRunningTime="2026-01-20 15:31:18.10076836 +0000 UTC m=+2473.910599218" watchObservedRunningTime="2026-01-20 15:31:18.107547409 +0000 UTC m=+2473.917378267" Jan 20 15:31:24 crc kubenswrapper[4949]: I0120 15:31:24.804573 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:31:24 crc kubenswrapper[4949]: E0120 15:31:24.805129 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:31:37 crc kubenswrapper[4949]: I0120 15:31:37.789375 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:31:37 crc kubenswrapper[4949]: E0120 15:31:37.791148 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:31:51 crc kubenswrapper[4949]: I0120 15:31:51.789483 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:31:51 crc kubenswrapper[4949]: E0120 15:31:51.790429 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:32:04 crc kubenswrapper[4949]: I0120 15:32:04.796863 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:32:05 crc kubenswrapper[4949]: I0120 15:32:05.494283 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea"} Jan 20 15:32:26 crc kubenswrapper[4949]: I0120 15:32:26.679797 4949 generic.go:334] "Generic (PLEG): container finished" podID="a6c12b14-7d12-46ea-be9c-15789d700112" containerID="9e6bd56465eba918050013087e291973ba2f51b53db86a1b50dea1710cedc6c7" exitCode=0 Jan 20 15:32:26 crc kubenswrapper[4949]: I0120 15:32:26.679882 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerDied","Data":"9e6bd56465eba918050013087e291973ba2f51b53db86a1b50dea1710cedc6c7"} Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.231529 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311618 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311711 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311778 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311806 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.311961 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.312054 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.312099 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") pod \"a6c12b14-7d12-46ea-be9c-15789d700112\" (UID: \"a6c12b14-7d12-46ea-be9c-15789d700112\") " Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.318716 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x" (OuterVolumeSpecName: "kube-api-access-gsr7x") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "kube-api-access-gsr7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.318742 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph" (OuterVolumeSpecName: "ceph") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.319282 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.340058 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory" (OuterVolumeSpecName: "inventory") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.345766 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.348143 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.350963 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a6c12b14-7d12-46ea-be9c-15789d700112" (UID: "a6c12b14-7d12-46ea-be9c-15789d700112"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414070 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414119 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414137 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414152 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414167 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsr7x\" (UniqueName: \"kubernetes.io/projected/a6c12b14-7d12-46ea-be9c-15789d700112-kube-api-access-gsr7x\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414184 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.414195 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6c12b14-7d12-46ea-be9c-15789d700112-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.699975 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" event={"ID":"a6c12b14-7d12-46ea-be9c-15789d700112","Type":"ContainerDied","Data":"958a6ad4ddf1df7b835d62486cb75542552b1f8e543ba46f928986491ecb2fbf"} Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.700407 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958a6ad4ddf1df7b835d62486cb75542552b1f8e543ba46f928986491ecb2fbf" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.700020 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.847201 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"] Jan 20 15:32:28 crc kubenswrapper[4949]: E0120 15:32:28.847996 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c12b14-7d12-46ea-be9c-15789d700112" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.848022 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c12b14-7d12-46ea-be9c-15789d700112" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.848573 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c12b14-7d12-46ea-be9c-15789d700112" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.849637 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854273 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854506 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854609 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.854846 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.855246 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.855244 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:32:28 crc kubenswrapper[4949]: I0120 15:32:28.857646 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"] Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063633 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnx8l\" (UniqueName: 
\"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063785 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063873 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.063918 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.164954 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165005 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165037 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165185 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.165226 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.169653 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.169974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.170033 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.171707 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.171981 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.181222 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9xbns\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:29 crc kubenswrapper[4949]: I0120 15:32:29.479570 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:32:30 crc kubenswrapper[4949]: I0120 15:32:30.059149 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns"] Jan 20 15:32:30 crc kubenswrapper[4949]: W0120 15:32:30.063318 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccd4282a_7ba2_4eda_9078_00d3f0ff58c4.slice/crio-31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f WatchSource:0}: Error finding container 31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f: Status 404 returned error can't find the container with id 31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f Jan 20 15:32:30 crc kubenswrapper[4949]: I0120 15:32:30.716694 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerStarted","Data":"31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f"} Jan 20 15:32:31 crc kubenswrapper[4949]: I0120 15:32:31.727244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerStarted","Data":"43cfd0eb10ad3989bcad841d90f1f6cdaa4d3595269e195d20ba1d514e80e53f"} Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.691324 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" podStartSLOduration=92.03315455 podStartE2EDuration="1m32.691300963s" podCreationTimestamp="2026-01-20 15:32:28 +0000 UTC" firstStartedPulling="2026-01-20 15:32:30.066256105 +0000 UTC m=+2545.876086963" lastFinishedPulling="2026-01-20 15:32:30.724402518 +0000 UTC m=+2546.534233376" observedRunningTime="2026-01-20 15:32:31.752897063 +0000 UTC m=+2547.562727951" watchObservedRunningTime="2026-01-20 15:34:00.691300963 +0000 UTC m=+2636.501131821" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.698813 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.701718 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.709580 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.726546 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.727839 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.728146 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.830661 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831172 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831753 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.831869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:00 crc kubenswrapper[4949]: I0120 15:34:00.853343 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"redhat-marketplace-ln9wm\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:01 crc kubenswrapper[4949]: I0120 15:34:01.026061 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:01 crc kubenswrapper[4949]: I0120 15:34:01.499797 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:01 crc kubenswrapper[4949]: I0120 15:34:01.610622 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerStarted","Data":"5ca3f3b3d5d28fc595329025f84cb3ba0155bfb1f85916545dd5b6c003b93b77"} Jan 20 15:34:02 crc kubenswrapper[4949]: I0120 15:34:02.620193 4949 generic.go:334] "Generic (PLEG): container finished" podID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7" exitCode=0 Jan 20 15:34:02 crc kubenswrapper[4949]: I0120 15:34:02.620306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7"} Jan 20 15:34:02 crc kubenswrapper[4949]: I0120 15:34:02.622740 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:34:04 crc kubenswrapper[4949]: I0120 15:34:04.642946 4949 generic.go:334] "Generic (PLEG): container finished" podID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1" exitCode=0 Jan 20 15:34:04 crc kubenswrapper[4949]: I0120 15:34:04.643054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1"} Jan 20 15:34:06 crc kubenswrapper[4949]: I0120 15:34:06.678745 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerStarted","Data":"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"} Jan 20 15:34:06 crc kubenswrapper[4949]: I0120 15:34:06.709011 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ln9wm" podStartSLOduration=3.75691968 podStartE2EDuration="6.708986237s" podCreationTimestamp="2026-01-20 15:34:00 +0000 UTC" firstStartedPulling="2026-01-20 15:34:02.622508073 +0000 UTC m=+2638.432338931" lastFinishedPulling="2026-01-20 15:34:05.57457463 +0000 UTC m=+2641.384405488" observedRunningTime="2026-01-20 15:34:06.702427734 +0000 UTC m=+2642.512258592" watchObservedRunningTime="2026-01-20 15:34:06.708986237 +0000 UTC m=+2642.518817095" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.026933 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.027748 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.102269 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.763065 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:11 crc kubenswrapper[4949]: I0120 15:34:11.822430 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:13 crc kubenswrapper[4949]: I0120 15:34:13.733807 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ln9wm" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server" containerID="cri-o://0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f" gracePeriod=2 Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.204290 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.309907 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") pod \"83ae6c9a-a314-4dc0-9859-1febb6555498\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.310010 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") pod \"83ae6c9a-a314-4dc0-9859-1febb6555498\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.310252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") pod \"83ae6c9a-a314-4dc0-9859-1febb6555498\" (UID: \"83ae6c9a-a314-4dc0-9859-1febb6555498\") " Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.311091 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities" (OuterVolumeSpecName: "utilities") pod "83ae6c9a-a314-4dc0-9859-1febb6555498" (UID: "83ae6c9a-a314-4dc0-9859-1febb6555498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.324672 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc" (OuterVolumeSpecName: "kube-api-access-5jdxc") pod "83ae6c9a-a314-4dc0-9859-1febb6555498" (UID: "83ae6c9a-a314-4dc0-9859-1febb6555498"). InnerVolumeSpecName "kube-api-access-5jdxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.333311 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ae6c9a-a314-4dc0-9859-1febb6555498" (UID: "83ae6c9a-a314-4dc0-9859-1febb6555498"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.412327 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.412382 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jdxc\" (UniqueName: \"kubernetes.io/projected/83ae6c9a-a314-4dc0-9859-1febb6555498-kube-api-access-5jdxc\") on node \"crc\" DevicePath \"\"" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.412394 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ae6c9a-a314-4dc0-9859-1febb6555498-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744013 4949 generic.go:334] "Generic (PLEG): container finished" podID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f" exitCode=0 Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744313 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln9wm" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"} Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744443 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln9wm" event={"ID":"83ae6c9a-a314-4dc0-9859-1febb6555498","Type":"ContainerDied","Data":"5ca3f3b3d5d28fc595329025f84cb3ba0155bfb1f85916545dd5b6c003b93b77"} Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.744470 4949 scope.go:117] "RemoveContainer" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.776843 4949 scope.go:117] "RemoveContainer" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.810258 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.810327 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln9wm"] Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.819706 4949 scope.go:117] "RemoveContainer" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.848101 4949 scope.go:117] "RemoveContainer" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f" Jan 20 15:34:14 crc kubenswrapper[4949]: E0120 15:34:14.848927 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f\": container with ID starting with 0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f not found: ID does not exist" containerID="0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.848985 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f"} err="failed to get container status \"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f\": rpc error: code = NotFound desc = could not find container \"0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f\": container with ID starting with 0f579191d95f4ab552ec8aadf24020757210f70aebe7974c9bd0d4ba69dbb01f not found: ID does not exist" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.849021 4949 scope.go:117] "RemoveContainer" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1" Jan 20 15:34:14 crc kubenswrapper[4949]: E0120 15:34:14.850248 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1\": container with ID starting with a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1 not found: ID does not exist" containerID="a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.850302 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1"} err="failed to get container status \"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1\": rpc error: code = NotFound desc = could not find container \"a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1\": container with ID starting with a547afeeb5ab7248a7187b9b42558f74cb2633811a8285c0c73634fcec6fe2a1 not found: ID does not exist" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.850333 4949 scope.go:117] "RemoveContainer" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7" Jan 20 15:34:14 crc kubenswrapper[4949]: E0120 15:34:14.850781 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7\": container with ID starting with 4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7 not found: ID does not exist" containerID="4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7" Jan 20 15:34:14 crc kubenswrapper[4949]: I0120 15:34:14.850840 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7"} err="failed to get container status \"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7\": rpc error: code = NotFound desc = could not find container \"4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7\": container with ID starting with 4b2d5a83919599e6e2cc5cd8d8a946f377972d76dfc6eb5348c7ea40673b46d7 not found: ID does not exist" Jan 20 15:34:16 crc kubenswrapper[4949]: I0120 15:34:16.802082 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" path="/var/lib/kubelet/pods/83ae6c9a-a314-4dc0-9859-1febb6555498/volumes" Jan 20 15:34:27 crc kubenswrapper[4949]: I0120 15:34:27.152501 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Jan 20 15:34:27 crc kubenswrapper[4949]: I0120 15:34:27.152979 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.152385 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.153085 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.919151 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4h62x"]
Jan 20 15:34:57 crc kubenswrapper[4949]: E0120 15:34:57.920002 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-utilities"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920028 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-utilities"
Jan 20 15:34:57 crc kubenswrapper[4949]: E0120 15:34:57.920064 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-content"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920073 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="extract-content"
Jan 20 15:34:57 crc kubenswrapper[4949]: E0120 15:34:57.920090 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920099 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.920303 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ae6c9a-a314-4dc0-9859-1febb6555498" containerName="registry-server"
Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.921807 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h62x"
Need to start a new one" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:57 crc kubenswrapper[4949]: I0120 15:34:57.939902 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4h62x"] Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.054427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.054561 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.054593 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.155821 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.155875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.155941 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.156376 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.158617 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.175511 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"community-operators-4h62x\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.242582 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:34:58 crc kubenswrapper[4949]: I0120 15:34:58.825398 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4h62x"] Jan 20 15:34:59 crc kubenswrapper[4949]: I0120 15:34:59.157583 4949 generic.go:334] "Generic (PLEG): container finished" podID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerID="b25eb601db495762aa2c1dce730d0bc786cef26614edc2df7ad8c09198618acd" exitCode=0 Jan 20 15:34:59 crc kubenswrapper[4949]: I0120 15:34:59.157724 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"b25eb601db495762aa2c1dce730d0bc786cef26614edc2df7ad8c09198618acd"} Jan 20 15:34:59 crc kubenswrapper[4949]: I0120 15:34:59.157864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerStarted","Data":"e222a46ace7165c925fc4bbde5985238a248b2ea1528cbed31f9f276ca123e73"} Jan 20 15:35:01 crc kubenswrapper[4949]: I0120 15:35:01.174666 4949 generic.go:334] "Generic (PLEG): container finished" podID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerID="76a5595e5cd26fffaa0ceb9cde98dcd008151ee5eae0290b95de7858f3eded5f" exitCode=0 Jan 20 15:35:01 crc kubenswrapper[4949]: I0120 15:35:01.174775 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"76a5595e5cd26fffaa0ceb9cde98dcd008151ee5eae0290b95de7858f3eded5f"} Jan 20 15:35:03 crc kubenswrapper[4949]: I0120 15:35:03.202673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerStarted","Data":"0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25"} Jan 20 15:35:03 crc kubenswrapper[4949]: I0120 15:35:03.224394 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4h62x" podStartSLOduration=3.315814551 podStartE2EDuration="6.224370967s" podCreationTimestamp="2026-01-20 15:34:57 +0000 UTC" firstStartedPulling="2026-01-20 15:34:59.159048515 +0000 UTC m=+2694.968879373" lastFinishedPulling="2026-01-20 15:35:02.067604911 +0000 UTC m=+2697.877435789" observedRunningTime="2026-01-20 15:35:03.217485544 +0000 UTC m=+2699.027316412" watchObservedRunningTime="2026-01-20 15:35:03.224370967 +0000 UTC m=+2699.034201825" Jan 20 15:35:08 crc kubenswrapper[4949]: I0120 15:35:08.243583 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:35:08 crc kubenswrapper[4949]: I0120 15:35:08.244575 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:35:08 crc kubenswrapper[4949]: I0120 15:35:08.316851 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:35:09 crc kubenswrapper[4949]: I0120 15:35:09.304489 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:35:09 crc kubenswrapper[4949]: I0120 15:35:09.362090 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4h62x"] Jan 20 15:35:11 crc kubenswrapper[4949]: I0120 15:35:11.276837 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4h62x" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server" containerID="cri-o://0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25" gracePeriod=2 Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.291437 4949 generic.go:334] "Generic (PLEG): container finished" podID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerID="0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25" exitCode=0 Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.291524 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25"} Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.292208 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4h62x" event={"ID":"94bdc9ae-4946-48f5-8aa5-15a138c85b14","Type":"ContainerDied","Data":"e222a46ace7165c925fc4bbde5985238a248b2ea1528cbed31f9f276ca123e73"} Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.292229 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e222a46ace7165c925fc4bbde5985238a248b2ea1528cbed31f9f276ca123e73" Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.354871 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.459042 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") pod \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.459204 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") pod \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.459265 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") pod \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\" (UID: \"94bdc9ae-4946-48f5-8aa5-15a138c85b14\") " Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.460024 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities" (OuterVolumeSpecName: "utilities") pod "94bdc9ae-4946-48f5-8aa5-15a138c85b14" (UID: "94bdc9ae-4946-48f5-8aa5-15a138c85b14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.465721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj" (OuterVolumeSpecName: "kube-api-access-b6zbj") pod "94bdc9ae-4946-48f5-8aa5-15a138c85b14" (UID: "94bdc9ae-4946-48f5-8aa5-15a138c85b14"). InnerVolumeSpecName "kube-api-access-b6zbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.522128 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94bdc9ae-4946-48f5-8aa5-15a138c85b14" (UID: "94bdc9ae-4946-48f5-8aa5-15a138c85b14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.561407 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zbj\" (UniqueName: \"kubernetes.io/projected/94bdc9ae-4946-48f5-8aa5-15a138c85b14-kube-api-access-b6zbj\") on node \"crc\" DevicePath \"\"" Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.561463 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:35:12 crc kubenswrapper[4949]: I0120 15:35:12.561475 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94bdc9ae-4946-48f5-8aa5-15a138c85b14-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:35:13 crc kubenswrapper[4949]: I0120 15:35:13.298755 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4h62x" Jan 20 15:35:13 crc kubenswrapper[4949]: I0120 15:35:13.321569 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4h62x"] Jan 20 15:35:13 crc kubenswrapper[4949]: I0120 15:35:13.328973 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4h62x"] Jan 20 15:35:14 crc kubenswrapper[4949]: I0120 15:35:14.798128 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" path="/var/lib/kubelet/pods/94bdc9ae-4946-48f5-8aa5-15a138c85b14/volumes" Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.152508 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.153176 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.153225 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.154132 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.154238 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea" gracePeriod=600 Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.429882 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea" exitCode=0 Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.429969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea"} Jan 20 15:35:27 crc kubenswrapper[4949]: I0120 15:35:27.430281 4949 scope.go:117] "RemoveContainer" containerID="452573d3fc23e68e8e13a2f0eebc1b22d508c4f3bec42b206af18e9665729223" Jan 20 15:35:28 crc kubenswrapper[4949]: I0120 15:35:28.442178 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" 
event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"} Jan 20 15:37:18 crc kubenswrapper[4949]: I0120 15:37:18.433486 4949 generic.go:334] "Generic (PLEG): container finished" podID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerID="43cfd0eb10ad3989bcad841d90f1f6cdaa4d3595269e195d20ba1d514e80e53f" exitCode=0 Jan 20 15:37:18 crc kubenswrapper[4949]: I0120 15:37:18.433575 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerDied","Data":"43cfd0eb10ad3989bcad841d90f1f6cdaa4d3595269e195d20ba1d514e80e53f"} Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.897198 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993429 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993596 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993619 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993671 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.993774 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") pod \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\" (UID: \"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4\") " Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.999592 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l" (OuterVolumeSpecName: "kube-api-access-wnx8l") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "kube-api-access-wnx8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:37:19 crc kubenswrapper[4949]: I0120 15:37:19.999600 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph" (OuterVolumeSpecName: "ceph") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.003872 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.042135 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.045852 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.046415 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory" (OuterVolumeSpecName: "inventory") pod "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" (UID: "ccd4282a-7ba2-4eda-9078-00d3f0ff58c4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096140 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096181 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnx8l\" (UniqueName: \"kubernetes.io/projected/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-kube-api-access-wnx8l\") on node \"crc\" DevicePath \"\"" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096195 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096206 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096219 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.096230 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd4282a-7ba2-4eda-9078-00d3f0ff58c4-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.457051 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" event={"ID":"ccd4282a-7ba2-4eda-9078-00d3f0ff58c4","Type":"ContainerDied","Data":"31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f"} Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.457094 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31b235f119358d27b3b64650c72504b883983eb954fe5737b0d9b293ea70528f" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.457134 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9xbns" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.582366 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"] Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.582809 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-content" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583020 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-content" Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.583041 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583051 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.583081 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-utilities" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583090 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="extract-utilities" Jan 20 15:37:20 crc kubenswrapper[4949]: E0120 15:37:20.583107 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583118 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583363 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="94bdc9ae-4946-48f5-8aa5-15a138c85b14" containerName="registry-server" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.583404 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd4282a-7ba2-4eda-9078-00d3f0ff58c4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.584148 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596069 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596336 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596538 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596721 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.596973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cfbwp" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597169 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597338 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597503 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.597670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.611881 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"] Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707352 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707388 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707532 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707671 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707745 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.707916 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.708017 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.708047 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" 
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809194 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809269 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809347 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809404 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809434 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809468 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809489 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"
\"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809559 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.809658 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.810435 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.810867 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.814623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.815209 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.815453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.816045 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.816103 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.818174 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.818657 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.828070 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.829130 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:20 crc kubenswrapper[4949]: I0120 15:37:20.913551 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:37:21 crc kubenswrapper[4949]: I0120 15:37:21.455943 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff"] Jan 20 15:37:22 crc kubenswrapper[4949]: I0120 15:37:22.479597 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerStarted","Data":"c6f70da926b771c5d1c2f1ccd50cc7324ccd775fb73876066b3a4d6d02b7e43a"} Jan 20 15:37:22 crc kubenswrapper[4949]: I0120 15:37:22.480002 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerStarted","Data":"311b2a18e1d8378252caca1377c9d806a5e7a75e15f5a57cd03a24147cb2b537"} Jan 20 15:37:22 crc kubenswrapper[4949]: I0120 15:37:22.509723 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" podStartSLOduration=2.062464891 podStartE2EDuration="2.509697072s" podCreationTimestamp="2026-01-20 15:37:20 +0000 UTC" firstStartedPulling="2026-01-20 15:37:21.463617508 +0000 UTC m=+2837.273448366" lastFinishedPulling="2026-01-20 15:37:21.910849679 +0000 UTC m=+2837.720680547" observedRunningTime="2026-01-20 15:37:22.501919833 +0000 UTC m=+2838.311750701" watchObservedRunningTime="2026-01-20 15:37:22.509697072 +0000 UTC m=+2838.319527930" Jan 20 15:37:27 crc kubenswrapper[4949]: I0120 15:37:27.151965 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:37:27 crc kubenswrapper[4949]: I0120 15:37:27.152510 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:37:57 crc kubenswrapper[4949]: I0120 15:37:57.152325 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:37:57 crc kubenswrapper[4949]: I0120 15:37:57.155197 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.152136 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.152731 
4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.152779 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.153548 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:38:27 crc kubenswrapper[4949]: I0120 15:38:27.153598 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" gracePeriod=600 Jan 20 15:38:27 crc kubenswrapper[4949]: E0120 15:38:27.372143 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:38:28 crc kubenswrapper[4949]: I0120 15:38:28.074429 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" exitCode=0 Jan 20 15:38:28 crc kubenswrapper[4949]: I0120 15:38:28.074538 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17"} Jan 20 15:38:28 crc kubenswrapper[4949]: I0120 15:38:28.074824 4949 scope.go:117] "RemoveContainer" containerID="42e82102f582b55474c07636502036b7621613b6975293f9f00dce1f9b3635ea" Jan 20 15:38:28 crc kubenswrapper[4949]: I0120 15:38:28.075507 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:38:28 crc kubenswrapper[4949]: E0120 15:38:28.075835 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:38:40 crc kubenswrapper[4949]: I0120 15:38:40.788884 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:38:40 crc kubenswrapper[4949]: E0120 15:38:40.789833 4949 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:38:52 crc kubenswrapper[4949]: I0120 15:38:52.789262 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:38:52 crc kubenswrapper[4949]: E0120 15:38:52.790365 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:05 crc kubenswrapper[4949]: I0120 15:39:05.789129 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:05 crc kubenswrapper[4949]: E0120 15:39:05.789976 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.535260 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.539212 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.562215 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.642730 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.642835 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.642922 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.745989 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.746216 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.746318 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.746992 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.747048 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.770946 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"certified-operators-9dsht\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:17 crc kubenswrapper[4949]: I0120 15:39:17.876993 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:18 crc kubenswrapper[4949]: I0120 15:39:18.402382 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:18 crc kubenswrapper[4949]: I0120 15:39:18.553655 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerStarted","Data":"a94e74f233c2ad8e5fbd2dcc0d41236255c1ea5d5a8af535972972a863634d26"} Jan 20 15:39:19 crc kubenswrapper[4949]: I0120 15:39:19.566302 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerID="9f27516f93a82a5ec8321727892970bbe72577d3f4d730422035ab9d2694235d" exitCode=0 Jan 20 15:39:19 crc kubenswrapper[4949]: I0120 15:39:19.566369 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"9f27516f93a82a5ec8321727892970bbe72577d3f4d730422035ab9d2694235d"} Jan 20 15:39:19 crc kubenswrapper[4949]: I0120 15:39:19.573040 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:39:20 crc kubenswrapper[4949]: I0120 15:39:20.578225 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerStarted","Data":"1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd"} Jan 20 15:39:20 crc kubenswrapper[4949]: I0120 15:39:20.789263 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:20 crc kubenswrapper[4949]: E0120 15:39:20.789545 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.382231 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.384856 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.401948 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.533721 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.533787 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.533883 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.590085 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd"} Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.589938 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerID="1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd" exitCode=0 Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.635635 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.635832 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.635866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.636251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"redhat-operators-zlpwm\" (UID: 
\"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.636805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.666753 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"redhat-operators-zlpwm\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:21 crc kubenswrapper[4949]: I0120 15:39:21.713286 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.210028 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:22 crc kubenswrapper[4949]: W0120 15:39:22.218494 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f342955_4a85_4515_a30f_4df633975c84.slice/crio-a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0 WatchSource:0}: Error finding container a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0: Status 404 returned error can't find the container with id a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0 Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.603185 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f342955-4a85-4515-a30f-4df633975c84" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" exitCode=0 Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.603246 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544"} Jan 20 15:39:22 crc kubenswrapper[4949]: I0120 15:39:22.603277 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerStarted","Data":"a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0"} Jan 20 15:39:24 crc kubenswrapper[4949]: I0120 15:39:24.622607 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerStarted","Data":"c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c"} Jan 20 15:39:24 crc kubenswrapper[4949]: I0120 15:39:24.632972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerStarted","Data":"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0"} Jan 20 15:39:24 crc kubenswrapper[4949]: I0120 15:39:24.647243 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dsht" podStartSLOduration=3.480564975 
podStartE2EDuration="7.647226325s" podCreationTimestamp="2026-01-20 15:39:17 +0000 UTC" firstStartedPulling="2026-01-20 15:39:19.572610661 +0000 UTC m=+2955.382441559" lastFinishedPulling="2026-01-20 15:39:23.739272051 +0000 UTC m=+2959.549102909" observedRunningTime="2026-01-20 15:39:24.646284986 +0000 UTC m=+2960.456115844" watchObservedRunningTime="2026-01-20 15:39:24.647226325 +0000 UTC m=+2960.457057183" Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.662052 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f342955-4a85-4515-a30f-4df633975c84" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" exitCode=0 Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.662160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0"} Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.877479 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.877792 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:27 crc kubenswrapper[4949]: I0120 15:39:27.922898 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:29 crc kubenswrapper[4949]: I0120 15:39:29.683572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerStarted","Data":"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291"} Jan 20 15:39:29 crc kubenswrapper[4949]: I0120 15:39:29.710464 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zlpwm" podStartSLOduration=2.518769884 podStartE2EDuration="8.71043884s" podCreationTimestamp="2026-01-20 15:39:21 +0000 UTC" firstStartedPulling="2026-01-20 15:39:22.605505652 +0000 UTC m=+2958.415336520" lastFinishedPulling="2026-01-20 15:39:28.797174608 +0000 UTC m=+2964.607005476" observedRunningTime="2026-01-20 15:39:29.707027752 +0000 UTC m=+2965.516858630" watchObservedRunningTime="2026-01-20 15:39:29.71043884 +0000 UTC m=+2965.520269708" Jan 20 15:39:29 crc kubenswrapper[4949]: I0120 15:39:29.737242 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:31 crc kubenswrapper[4949]: I0120 15:39:31.714341 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:31 crc kubenswrapper[4949]: I0120 15:39:31.714974 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.522090 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.522403 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dsht" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" 
containerID="cri-o://c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c" gracePeriod=2 Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.710618 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerID="c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c" exitCode=0 Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.710710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c"} Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.765402 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zlpwm" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" probeResult="failure" output=< Jan 20 15:39:32 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 15:39:32 crc kubenswrapper[4949]: > Jan 20 15:39:32 crc kubenswrapper[4949]: I0120 15:39:32.961909 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.097797 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") pod \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.097906 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") pod \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.097937 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") pod \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\" (UID: \"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058\") " Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.098752 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities" (OuterVolumeSpecName: "utilities") pod "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" (UID: "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.106893 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv" (OuterVolumeSpecName: "kube-api-access-2kzkv") pod "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" (UID: "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058"). InnerVolumeSpecName "kube-api-access-2kzkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.153885 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" (UID: "9f4a7bfc-59ed-46b7-a67f-1ac1423ea058"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.199801 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kzkv\" (UniqueName: \"kubernetes.io/projected/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-kube-api-access-2kzkv\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.199836 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.199849 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.723737 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dsht" event={"ID":"9f4a7bfc-59ed-46b7-a67f-1ac1423ea058","Type":"ContainerDied","Data":"a94e74f233c2ad8e5fbd2dcc0d41236255c1ea5d5a8af535972972a863634d26"} Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.723794 4949 scope.go:117] "RemoveContainer" containerID="c921fed5dda3a4c0020f12561a5a849caa93a978e6f17d837ace3dd8230d7f0c" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.723815 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dsht" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.756265 4949 scope.go:117] "RemoveContainer" containerID="1d02ae655afde2533b1b1facf993b80be8aa0aeec50e734f1d71045e112da8bd" Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.760285 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.770434 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dsht"] Jan 20 15:39:33 crc kubenswrapper[4949]: I0120 15:39:33.778661 4949 scope.go:117] "RemoveContainer" containerID="9f27516f93a82a5ec8321727892970bbe72577d3f4d730422035ab9d2694235d" Jan 20 15:39:34 crc kubenswrapper[4949]: I0120 15:39:34.796131 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:34 crc kubenswrapper[4949]: E0120 15:39:34.796908 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:39:34 crc kubenswrapper[4949]: I0120 15:39:34.800304 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" path="/var/lib/kubelet/pods/9f4a7bfc-59ed-46b7-a67f-1ac1423ea058/volumes" Jan 20 15:39:41 crc kubenswrapper[4949]: I0120 15:39:41.767957 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:41 crc kubenswrapper[4949]: I0120 15:39:41.842686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:42 crc kubenswrapper[4949]: I0120 15:39:42.013374 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:42 crc kubenswrapper[4949]: I0120 15:39:42.816384 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zlpwm" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" containerID="cri-o://06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" gracePeriod=2 Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.313435 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.398805 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") pod \"1f342955-4a85-4515-a30f-4df633975c84\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.398905 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") pod \"1f342955-4a85-4515-a30f-4df633975c84\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.399121 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") pod \"1f342955-4a85-4515-a30f-4df633975c84\" (UID: \"1f342955-4a85-4515-a30f-4df633975c84\") " Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.400291 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities" (OuterVolumeSpecName: "utilities") pod "1f342955-4a85-4515-a30f-4df633975c84" (UID: "1f342955-4a85-4515-a30f-4df633975c84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.410095 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz" (OuterVolumeSpecName: "kube-api-access-s9vtz") pod "1f342955-4a85-4515-a30f-4df633975c84" (UID: "1f342955-4a85-4515-a30f-4df633975c84"). InnerVolumeSpecName "kube-api-access-s9vtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.501822 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.501857 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9vtz\" (UniqueName: \"kubernetes.io/projected/1f342955-4a85-4515-a30f-4df633975c84-kube-api-access-s9vtz\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.556909 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f342955-4a85-4515-a30f-4df633975c84" (UID: "1f342955-4a85-4515-a30f-4df633975c84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.603209 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f342955-4a85-4515-a30f-4df633975c84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825687 4949 generic.go:334] "Generic (PLEG): container finished" podID="1f342955-4a85-4515-a30f-4df633975c84" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" exitCode=0 Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825736 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291"} Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825765 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlpwm" event={"ID":"1f342955-4a85-4515-a30f-4df633975c84","Type":"ContainerDied","Data":"a1dff033ca91639681da1e8b10d1f0bfeb48dfa10ab64d7de02f379cbbd204e0"} Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825785 4949 scope.go:117] "RemoveContainer" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.825741 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlpwm" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.848870 4949 scope.go:117] "RemoveContainer" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.865340 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.878425 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zlpwm"] Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.887500 4949 scope.go:117] "RemoveContainer" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915155 4949 scope.go:117] "RemoveContainer" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" Jan 20 15:39:43 crc kubenswrapper[4949]: E0120 15:39:43.915564 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291\": container with ID starting with 06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291 not found: ID does not exist" containerID="06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915599 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291"} err="failed to get container status \"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291\": rpc error: code = NotFound desc = could not find container \"06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291\": container with ID starting with 06fe8cf6ee80b8c675f7c01bc927d33174bb484fa8ff69cdf139eff4720c6291 not found: ID does not exist" Jan 20 15:39:43 crc 
kubenswrapper[4949]: I0120 15:39:43.915623 4949 scope.go:117] "RemoveContainer" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" Jan 20 15:39:43 crc kubenswrapper[4949]: E0120 15:39:43.915875 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0\": container with ID starting with 4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0 not found: ID does not exist" containerID="4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915898 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0"} err="failed to get container status \"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0\": rpc error: code = NotFound desc = could not find container \"4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0\": container with ID starting with 4297ec067afef97d5e38fb3f54e6f45efb6c8584ef4a10dcc3f3b16ddddeb6f0 not found: ID does not exist" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.915912 4949 scope.go:117] "RemoveContainer" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" Jan 20 15:39:43 crc kubenswrapper[4949]: E0120 15:39:43.916101 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544\": container with ID starting with b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544 not found: ID does not exist" containerID="b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544" Jan 20 15:39:43 crc kubenswrapper[4949]: I0120 15:39:43.916121 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544"} err="failed to get container status \"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544\": rpc error: code = NotFound desc = could not find container \"b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544\": container with ID starting with b84d5cee49f6bf4b8a7d3b360630fbfaaf5382c2d2acbedbfe9a1064b7058544 not found: ID does not exist" Jan 20 15:39:44 crc kubenswrapper[4949]: I0120 15:39:44.803466 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f342955-4a85-4515-a30f-4df633975c84" path="/var/lib/kubelet/pods/1f342955-4a85-4515-a30f-4df633975c84/volumes" Jan 20 15:39:48 crc kubenswrapper[4949]: I0120 15:39:48.789130 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:39:48 crc kubenswrapper[4949]: E0120 15:39:48.790012 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:02 crc kubenswrapper[4949]: I0120 15:40:02.788720 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" 
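The repeating "RemoveContainer" / "Error syncing pod, skipping … CrashLoopBackOff" pairs above come from the kubelet's per-container restart backoff: each failed restart roughly doubles the retry delay until it reaches the cap quoted in the error text (5m0s), after which the same error simply recurs on every sync attempt, which is why the entries keep appearing at widening and then steady intervals. A minimal sketch of that doubling-with-cap policy, assuming a 10s initial delay (the commonly cited kubelet default) and using a hypothetical helper nextDelay rather than kubelet's actual implementation:

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous restart delay up to maxDelay,
// mirroring the "back-off 5m0s restarting failed container"
// messages in the log. Illustrative helper, not kubelet source.
func nextDelay(prev, initial, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	next := 2 * prev
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	const (
		initial  = 10 * time.Second // assumed default initial backoff
		maxDelay = 5 * time.Minute  // cap quoted in the log lines
	)
	var d time.Duration
	for attempt := 1; attempt <= 7; attempt++ {
		d = nextDelay(d, initial, maxDelay)
		fmt.Printf("restart attempt %d: wait %v\n", attempt, d)
	}
	// Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}

Sketch only; in practice the kubelet eventually resets this backoff once the container has run cleanly for a while, so a container that stays up clears its CrashLoopBackOff state rather than staying capped at 5m0s forever.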
Jan 20 15:40:02 crc kubenswrapper[4949]: E0120 15:40:02.789620 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:13 crc kubenswrapper[4949]: I0120 15:40:13.790713 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:13 crc kubenswrapper[4949]: E0120 15:40:13.791637 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:15 crc kubenswrapper[4949]: I0120 15:40:15.119788 4949 generic.go:334] "Generic (PLEG): container finished" podID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" containerID="c6f70da926b771c5d1c2f1ccd50cc7324ccd775fb73876066b3a4d6d02b7e43a" exitCode=0 Jan 20 15:40:15 crc kubenswrapper[4949]: I0120 15:40:15.119864 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerDied","Data":"c6f70da926b771c5d1c2f1ccd50cc7324ccd775fb73876066b3a4d6d02b7e43a"} Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.566057 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737707 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737777 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737853 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737884 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.737970 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738005 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738079 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738216 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738251 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.738286 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") pod \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\" (UID: \"97b58b41-5a8f-47f7-af93-382d7a6f0e69\") " Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.744467 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph" (OuterVolumeSpecName: "ceph") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.760218 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm" (OuterVolumeSpecName: "kube-api-access-2rfbm") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "kube-api-access-2rfbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.760480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.763959 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.769062 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.771477 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.779543 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory" (OuterVolumeSpecName: "inventory") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.782816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.783480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.783645 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.784991 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "97b58b41-5a8f-47f7-af93-382d7a6f0e69" (UID: "97b58b41-5a8f-47f7-af93-382d7a6f0e69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.840745 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfbm\" (UniqueName: \"kubernetes.io/projected/97b58b41-5a8f-47f7-af93-382d7a6f0e69-kube-api-access-2rfbm\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841206 4949 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841267 4949 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841355 4949 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841439 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-inventory\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841512 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841607 4949 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841667 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841721 4949 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841781 4949 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:16 crc kubenswrapper[4949]: I0120 15:40:16.841842 4949 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b58b41-5a8f-47f7-af93-382d7a6f0e69-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:17 crc kubenswrapper[4949]: I0120 15:40:17.142821 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" event={"ID":"97b58b41-5a8f-47f7-af93-382d7a6f0e69","Type":"ContainerDied","Data":"311b2a18e1d8378252caca1377c9d806a5e7a75e15f5a57cd03a24147cb2b537"} Jan 20 15:40:17 crc kubenswrapper[4949]: I0120 15:40:17.142869 4949 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="311b2a18e1d8378252caca1377c9d806a5e7a75e15f5a57cd03a24147cb2b537" Jan 20 15:40:17 crc kubenswrapper[4949]: I0120 15:40:17.142880 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff" Jan 20 15:40:26 crc kubenswrapper[4949]: I0120 15:40:26.789617 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:26 crc kubenswrapper[4949]: E0120 15:40:26.790394 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.301162 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.303965 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.303990 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304022 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304031 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304060 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304069 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304185 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304197 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304216 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304226 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304256 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304270 4949 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="extract-content" Jan 20 15:40:32 crc kubenswrapper[4949]: E0120 15:40:32.304302 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304310 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="extract-utilities" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304816 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b58b41-5a8f-47f7-af93-382d7a6f0e69" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304871 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f342955-4a85-4515-a30f-4df633975c84" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.304959 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4a7bfc-59ed-46b7-a67f-1ac1423ea058" containerName="registry-server" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.307050 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.312012 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.312243 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332375 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332443 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332487 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332583 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332633 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: 
\"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332707 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332800 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-run\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332906 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzsxv\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-kube-api-access-bzsxv\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332939 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.332980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333024 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333084 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333215 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333263 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-ceph\") pod \"cinder-volume-volume1-0\" (UID: 
\"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.333341 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.345864 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.368952 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.382565 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.382656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.386305 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435533 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435602 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-ceph\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435704 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-scripts\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: 
I0120 15:40:32.435725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-run\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435777 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435813 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45jx\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-kube-api-access-d45jx\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435889 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-sys\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435928 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-run\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435954 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.435969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436033 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436074 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-dev\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzsxv\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-kube-api-access-bzsxv\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436122 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436177 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436202 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436264 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436277 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436290 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " 
pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436313 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436335 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-run\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436384 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436821 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.436895 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437661 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437781 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437810 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437831 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437890 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-lib-modules\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437914 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.437937 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.438119 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.438250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.438451 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83382677-6882-49eb-a111-498346e2d6dc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.443265 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc 
kubenswrapper[4949]: I0120 15:40:32.443668 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.443888 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.448342 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.448955 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83382677-6882-49eb-a111-498346e2d6dc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.457413 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzsxv\" (UniqueName: \"kubernetes.io/projected/83382677-6882-49eb-a111-498346e2d6dc-kube-api-access-bzsxv\") pod \"cinder-volume-volume1-0\" (UID: \"83382677-6882-49eb-a111-498346e2d6dc\") " pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539697 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-dev\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539758 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539781 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539814 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539841 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " 
pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539900 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539934 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539964 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-lib-modules\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.539988 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-ceph\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540051 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-scripts\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540073 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-run\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540137 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45jx\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-kube-api-access-d45jx\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-sys\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " 
pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540219 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540336 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-dev\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.540408 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541044 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541112 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541165 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.541195 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-run\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.544023 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.544777 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-ceph\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 
15:40:32.545093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.545124 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-sys\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.545145 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7354f89-1113-43f0-b654-a4222ee05faf-lib-modules\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.546536 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.546933 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.550009 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7354f89-1113-43f0-b654-a4222ee05faf-scripts\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.566802 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45jx\" (UniqueName: \"kubernetes.io/projected/f7354f89-1113-43f0-b654-a4222ee05faf-kube-api-access-d45jx\") pod \"cinder-backup-0\" (UID: \"f7354f89-1113-43f0-b654-a4222ee05faf\") " pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.649768 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.701040 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.897666 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.899363 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.912045 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.947395 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:32 crc kubenswrapper[4949]: I0120 15:40:32.947445 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.014666 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.016378 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.019049 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.022601 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.049188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.049278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.050100 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.050149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.050198 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv42g\" (UniqueName: 
\"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.075468 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"manila-db-create-6d468\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.096602 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.102710 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.105270 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.105270 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.107186 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.107393 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-csksn" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.117421 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.149394 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.150882 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152421 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx47j\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-kube-api-access-bx47j\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152453 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.152511 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-ceph\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153348 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153378 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-logs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153514 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153123 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.153901 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.154093 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.166714 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.172584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"manila-176a-account-create-update-gqg2s\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.222510 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254637 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-logs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254695 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254762 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254791 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254817 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254837 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzkq\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-kube-api-access-vzzkq\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254856 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " 
pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254941 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx47j\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-kube-api-access-bx47j\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254960 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.254986 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255052 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-ceph\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255071 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-logs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " 
pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.255571 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-logs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.258581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.258813 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.296254 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e7cb37b-debf-462c-8a81-81ce79da0ee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.299300 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-ceph\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.300611 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.302081 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.303372 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.313003 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cb37b-debf-462c-8a81-81ce79da0ee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.317603 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-bx47j\" (UniqueName: \"kubernetes.io/projected/2e7cb37b-debf-462c-8a81-81ce79da0ee9-kube-api-access-bx47j\") pod \"glance-default-external-api-0\" (UID: \"2e7cb37b-debf-462c-8a81-81ce79da0ee9\") " pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.338612 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.347352 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356713 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-logs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356820 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356878 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.356905 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357114 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357179 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzkq\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-kube-api-access-vzzkq\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357275 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357337 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.357846 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.360405 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-logs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.360688 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80681a49-f9f1-4208-a90e-77c74cc6860d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.362925 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.366374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.371948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.372324 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80681a49-f9f1-4208-a90e-77c74cc6860d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.380050 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzkq\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-kube-api-access-vzzkq\") pod \"glance-default-internal-api-0\" (UID: 
\"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.391010 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80681a49-f9f1-4208-a90e-77c74cc6860d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.397909 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"80681a49-f9f1-4208-a90e-77c74cc6860d\") " pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.426468 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.449509 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.478417 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.725904 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:40:33 crc kubenswrapper[4949]: I0120 15:40:33.946732 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.292836 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.332402 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerStarted","Data":"e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.332455 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerStarted","Data":"d60dc856d9dfa70c5fe4c448552f28df17c2b075c6d00e4a4b05f54ec8cd0abe"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.335399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e7cb37b-debf-462c-8a81-81ce79da0ee9","Type":"ContainerStarted","Data":"72224a88b64e3392b48bd04c97394de0c63f9a71c2f3236e2ee7a8db0ad4a025"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.338172 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"83382677-6882-49eb-a111-498346e2d6dc","Type":"ContainerStarted","Data":"4fc05c0ec4431d2066a8ab606bbb4842c66bc62ce9e8a5d896279b55d996da16"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.340471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6d468" event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerStarted","Data":"7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.340710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-db-create-6d468" event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerStarted","Data":"9ebe1721004a5e563ee2553c9b917f474caee423c1a76e213022afdf538240f6"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.343873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f7354f89-1113-43f0-b654-a4222ee05faf","Type":"ContainerStarted","Data":"ac9775da49df14a99eb1fa58155b73adcb489ea43bb3acd535078796dd99950a"} Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.358739 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-176a-account-create-update-gqg2s" podStartSLOduration=2.358713705 podStartE2EDuration="2.358713705s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:34.350825637 +0000 UTC m=+3030.160656515" watchObservedRunningTime="2026-01-20 15:40:34.358713705 +0000 UTC m=+3030.168544593" Jan 20 15:40:34 crc kubenswrapper[4949]: I0120 15:40:34.376723 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-6d468" podStartSLOduration=2.376700683 podStartE2EDuration="2.376700683s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:34.364120067 +0000 UTC m=+3030.173950945" watchObservedRunningTime="2026-01-20 15:40:34.376700683 +0000 UTC m=+3030.186531541" Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.361653 4949 generic.go:334] "Generic (PLEG): container finished" podID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerID="e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5" exitCode=0 Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.361767 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerDied","Data":"e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.367686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2e7cb37b-debf-462c-8a81-81ce79da0ee9","Type":"ContainerStarted","Data":"355de920866dafa64746511b96c0f241a02c39500999bdc7193f65335734bfa4"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.371009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"83382677-6882-49eb-a111-498346e2d6dc","Type":"ContainerStarted","Data":"6513cd7c8e5584e538704be3216cc31257fe1b6e202bd58db26e03a398e1a5ab"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.371088 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"83382677-6882-49eb-a111-498346e2d6dc","Type":"ContainerStarted","Data":"7bca075afa807a2a8891bc8c183c2316f637a4a481f57d1df51d0783fe02b3e2"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.381722 4949 generic.go:334] "Generic (PLEG): container finished" podID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerID="7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c" exitCode=0 Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.382117 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-db-create-6d468" event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerDied","Data":"7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.388412 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.393362 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f7354f89-1113-43f0-b654-a4222ee05faf","Type":"ContainerStarted","Data":"939130e52640d57727d0c2eb545d6490119076149220b0697b640a9863781cf1"} Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.393405 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f7354f89-1113-43f0-b654-a4222ee05faf","Type":"ContainerStarted","Data":"90c01e98f19a66d691dfb325988c77e8df1030ac9826f89d9476077423e59403"} Jan 20 15:40:35 crc kubenswrapper[4949]: W0120 15:40:35.400352 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80681a49_f9f1_4208_a90e_77c74cc6860d.slice/crio-e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6 WatchSource:0}: Error finding container e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6: Status 404 returned error can't find the container with id e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6 Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.422837 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.06268637 podStartE2EDuration="3.422774959s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="2026-01-20 15:40:33.368887422 +0000 UTC m=+3029.178718280" lastFinishedPulling="2026-01-20 15:40:34.728976011 +0000 UTC m=+3030.538806869" observedRunningTime="2026-01-20 15:40:35.413489997 +0000 UTC m=+3031.223320875" watchObservedRunningTime="2026-01-20 15:40:35.422774959 +0000 UTC m=+3031.232605827" Jan 20 15:40:35 crc kubenswrapper[4949]: I0120 15:40:35.462212 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.201578295 podStartE2EDuration="3.46219409s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="2026-01-20 15:40:33.46278202 +0000 UTC m=+3029.272612878" lastFinishedPulling="2026-01-20 15:40:34.723397815 +0000 UTC m=+3030.533228673" observedRunningTime="2026-01-20 15:40:35.448122997 +0000 UTC m=+3031.257953855" watchObservedRunningTime="2026-01-20 15:40:35.46219409 +0000 UTC m=+3031.272024948" Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.408483 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80681a49-f9f1-4208-a90e-77c74cc6860d","Type":"ContainerStarted","Data":"3100aacb59a194664bbac69672b3d4af326651f3f2d4579ba752f2ad350282e0"} Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.409136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80681a49-f9f1-4208-a90e-77c74cc6860d","Type":"ContainerStarted","Data":"e343e8baca7dd3202fa977307e3bda2d9eb0c27f4395616db6d238aff14defe6"} Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.413395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2e7cb37b-debf-462c-8a81-81ce79da0ee9","Type":"ContainerStarted","Data":"17830b243d2c3cc0eb092a96ea4eda4ef8130e7928fdf461f04f2a71a34b96bf"} Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.930159 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.937899 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:36 crc kubenswrapper[4949]: I0120 15:40:36.948431 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.948408163 podStartE2EDuration="4.948408163s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:36.453510082 +0000 UTC m=+3032.263340940" watchObservedRunningTime="2026-01-20 15:40:36.948408163 +0000 UTC m=+3032.758239021" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.069793 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") pod \"c1f501b4-e612-41a4-aef2-fdaf166aa018\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.069960 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") pod \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.070033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") pod \"c1f501b4-e612-41a4-aef2-fdaf166aa018\" (UID: \"c1f501b4-e612-41a4-aef2-fdaf166aa018\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.070114 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") pod \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\" (UID: \"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d\") " Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.070763 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1f501b4-e612-41a4-aef2-fdaf166aa018" (UID: "c1f501b4-e612-41a4-aef2-fdaf166aa018"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.071509 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" (UID: "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.079186 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g" (OuterVolumeSpecName: "kube-api-access-wv42g") pod "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" (UID: "92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d"). InnerVolumeSpecName "kube-api-access-wv42g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.082666 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w" (OuterVolumeSpecName: "kube-api-access-t9q9w") pod "c1f501b4-e612-41a4-aef2-fdaf166aa018" (UID: "c1f501b4-e612-41a4-aef2-fdaf166aa018"). InnerVolumeSpecName "kube-api-access-t9q9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173177 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f501b4-e612-41a4-aef2-fdaf166aa018-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173220 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv42g\" (UniqueName: \"kubernetes.io/projected/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-kube-api-access-wv42g\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173235 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9q9w\" (UniqueName: \"kubernetes.io/projected/c1f501b4-e612-41a4-aef2-fdaf166aa018-kube-api-access-t9q9w\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.173249 4949 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.423089 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-6d468" event={"ID":"c1f501b4-e612-41a4-aef2-fdaf166aa018","Type":"ContainerDied","Data":"9ebe1721004a5e563ee2553c9b917f474caee423c1a76e213022afdf538240f6"} Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.423137 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ebe1721004a5e563ee2553c9b917f474caee423c1a76e213022afdf538240f6" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.423141 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-6d468" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.427195 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"80681a49-f9f1-4208-a90e-77c74cc6860d","Type":"ContainerStarted","Data":"1ccfa6cce9202c4b0c4e30b10d09699e1f66ac03a630bbe9f9fe0aa288fdea1f"} Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.428980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-176a-account-create-update-gqg2s" event={"ID":"92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d","Type":"ContainerDied","Data":"d60dc856d9dfa70c5fe4c448552f28df17c2b075c6d00e4a4b05f54ec8cd0abe"} Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.429037 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60dc856d9dfa70c5fe4c448552f28df17c2b075c6d00e4a4b05f54ec8cd0abe" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.428996 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-176a-account-create-update-gqg2s" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.452997 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.452974869 podStartE2EDuration="5.452974869s" podCreationTimestamp="2026-01-20 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:40:37.447006951 +0000 UTC m=+3033.256837809" watchObservedRunningTime="2026-01-20 15:40:37.452974869 +0000 UTC m=+3033.262805727" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.651221 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:37 crc kubenswrapper[4949]: I0120 15:40:37.702131 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.371243 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:40:38 crc kubenswrapper[4949]: E0120 15:40:38.371854 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerName="mariadb-database-create" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.371882 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerName="mariadb-database-create" Jan 20 15:40:38 crc kubenswrapper[4949]: E0120 15:40:38.371910 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerName="mariadb-account-create-update" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.371921 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerName="mariadb-account-create-update" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.372162 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" containerName="mariadb-database-create" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.372207 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" containerName="mariadb-account-create-update" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.373069 4949 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.379721 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.383490 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p7v5q" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.402653 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.495871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.495936 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.495967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.496095 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.597712 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.597783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.597988 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.598068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod 
\"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.604581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.605240 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.608580 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.620277 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"manila-db-sync-q7rxq\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:38 crc kubenswrapper[4949]: I0120 15:40:38.690069 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:39 crc kubenswrapper[4949]: I0120 15:40:39.288953 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:40:39 crc kubenswrapper[4949]: I0120 15:40:39.457732 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerStarted","Data":"89e14acf3507cacfb29458f2d7e350450d3744023db41d932b820c823592f772"} Jan 20 15:40:40 crc kubenswrapper[4949]: I0120 15:40:40.790947 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:40 crc kubenswrapper[4949]: E0120 15:40:40.791925 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:42 crc kubenswrapper[4949]: I0120 15:40:42.866684 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 20 15:40:42 crc kubenswrapper[4949]: I0120 15:40:42.993336 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.426905 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.426970 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.470895 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.473019 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.484194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.484240 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.495401 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.495465 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.535204 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.535898 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:43 crc kubenswrapper[4949]: I0120 15:40:43.548831 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:44 crc kubenswrapper[4949]: I0120 15:40:44.508474 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.517996 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.517980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerStarted","Data":"d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be"} Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.553624 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-q7rxq" podStartSLOduration=2.689683811 podStartE2EDuration="7.553602817s" podCreationTimestamp="2026-01-20 15:40:38 +0000 UTC" firstStartedPulling="2026-01-20 15:40:39.299873756 +0000 UTC m=+3035.109704634" lastFinishedPulling="2026-01-20 15:40:44.163792782 +0000 UTC m=+3039.973623640" observedRunningTime="2026-01-20 15:40:45.551325626 +0000 UTC m=+3041.361156484" watchObservedRunningTime="2026-01-20 15:40:45.553602817 +0000 UTC m=+3041.363433675" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.666083 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.666168 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.834548 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.944573 4949 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:45 crc kubenswrapper[4949]: I0120 15:40:45.945228 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 20 15:40:54 crc kubenswrapper[4949]: I0120 15:40:54.789235 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:40:54 crc kubenswrapper[4949]: E0120 15:40:54.790081 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:40:55 crc kubenswrapper[4949]: I0120 15:40:55.615130 4949 generic.go:334] "Generic (PLEG): container finished" podID="1501061b-c734-43b8-8f88-0d895789e209" containerID="d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be" exitCode=0 Jan 20 15:40:55 crc kubenswrapper[4949]: I0120 15:40:55.615196 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerDied","Data":"d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be"} Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.042878 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196063 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196238 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196392 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.196498 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") pod \"1501061b-c734-43b8-8f88-0d895789e209\" (UID: \"1501061b-c734-43b8-8f88-0d895789e209\") " Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.202455 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k" (OuterVolumeSpecName: "kube-api-access-q9n7k") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "kube-api-access-q9n7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.205424 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data" (OuterVolumeSpecName: "config-data") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.206592 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.228446 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1501061b-c734-43b8-8f88-0d895789e209" (UID: "1501061b-c734-43b8-8f88-0d895789e209"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298668 4949 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298722 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298736 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9n7k\" (UniqueName: \"kubernetes.io/projected/1501061b-c734-43b8-8f88-0d895789e209-kube-api-access-q9n7k\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.298748 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1501061b-c734-43b8-8f88-0d895789e209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.632860 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-q7rxq" event={"ID":"1501061b-c734-43b8-8f88-0d895789e209","Type":"ContainerDied","Data":"89e14acf3507cacfb29458f2d7e350450d3744023db41d932b820c823592f772"} Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.632907 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e14acf3507cacfb29458f2d7e350450d3744023db41d932b820c823592f772" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.632914 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-q7rxq" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.889126 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:40:57 crc kubenswrapper[4949]: E0120 15:40:57.889934 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1501061b-c734-43b8-8f88-0d895789e209" containerName="manila-db-sync" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.889958 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1501061b-c734-43b8-8f88-0d895789e209" containerName="manila-db-sync" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.890181 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1501061b-c734-43b8-8f88-0d895789e209" containerName="manila-db-sync" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.891386 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.897730 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.898265 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-p7v5q" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.898357 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.905496 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.905970 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.989822 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.991927 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:40:57 crc kubenswrapper[4949]: I0120 15:40:57.994832 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.016708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.016899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017022 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017130 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017291 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017345 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017385 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.017411 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.062571 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:40:58 crc 
kubenswrapper[4949]: I0120 15:40:58.119420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.119804 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.119927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120129 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120219 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120299 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120371 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120464 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120574 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120685 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.120867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.121046 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.121194 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.123780 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.132556 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.143252 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.143436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0" Jan 20 15:40:58 
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.143893 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.150191 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.152383 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hf624"]
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.154662 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-hf624"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.160384 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"manila-share-share1-0\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " pod="openstack/manila-share-share1-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.165160 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hf624"]
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.220206 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222676 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222823 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0"
\"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222923 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-config\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.222984 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223070 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223103 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.223198 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh97c\" (UniqueName: \"kubernetes.io/projected/d723357a-5423-49c3-9263-ff768f28745f-kube-api-access-bh97c\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.227311 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"manila-scheduler-0\" (UID: 
\"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.228254 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.228730 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.250933 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.259739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"manila-scheduler-0\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.262563 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.265660 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.267620 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.311289 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324811 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324850 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh97c\" (UniqueName: \"kubernetes.io/projected/d723357a-5423-49c3-9263-ff768f28745f-kube-api-access-bh97c\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-config\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.324972 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.325036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.325058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.328305 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.328963 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.329991 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-config\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.330483 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.331003 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d723357a-5423-49c3-9263-ff768f28745f-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.349113 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.360224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh97c\" (UniqueName: \"kubernetes.io/projected/d723357a-5423-49c3-9263-ff768f28745f-kube-api-access-bh97c\") pod \"dnsmasq-dns-69655fd4bf-hf624\" (UID: \"d723357a-5423-49c3-9263-ff768f28745f\") " pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426845 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.426931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.427588 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.427680 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.427738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531498 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531811 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531907 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.531957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.532018 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.532149 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.537119 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.537453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.538181 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.549992 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.550705 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.552841 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"manila-api-0\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") " pod="openstack/manila-api-0"
Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.656252 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-hf624"
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:40:58 crc kubenswrapper[4949]: I0120 15:40:58.811630 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.045031 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.318240 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-hf624"] Jan 20 15:40:59 crc kubenswrapper[4949]: W0120 15:40:59.393120 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a5ca1bf_b5a9_49cb_aacd_6d9ac032a888.slice/crio-d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf WatchSource:0}: Error finding container d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf: Status 404 returned error can't find the container with id d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.393494 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.659857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerStarted","Data":"60e7d4a8ac8e4a2e8bf0ecf78ff8e8d81b95e8b85f820ad01959c6f7e2278fab"} Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.660809 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerStarted","Data":"d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf"} Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.661710 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" event={"ID":"d723357a-5423-49c3-9263-ff768f28745f","Type":"ContainerStarted","Data":"ea99d43cd7edf20d061752ac24599747f7f68498741c299e4acc055102c8398a"} Jan 20 15:40:59 crc kubenswrapper[4949]: I0120 15:40:59.666564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerStarted","Data":"5d5d321f43f48229f54657ca4759751495b89237e7296cf027757d58bd32dcaa"} Jan 20 15:41:00 crc kubenswrapper[4949]: I0120 15:41:00.679505 4949 generic.go:334] "Generic (PLEG): container finished" podID="d723357a-5423-49c3-9263-ff768f28745f" containerID="89c6ab7833515623f918af75e4868a7557c8c949f4c4ee87ed24783becaf2be7" exitCode=0 Jan 20 15:41:00 crc kubenswrapper[4949]: I0120 15:41:00.679895 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" event={"ID":"d723357a-5423-49c3-9263-ff768f28745f","Type":"ContainerDied","Data":"89c6ab7833515623f918af75e4868a7557c8c949f4c4ee87ed24783becaf2be7"} Jan 20 15:41:00 crc kubenswrapper[4949]: I0120 15:41:00.686996 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerStarted","Data":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.095178 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.697614 4949 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerStarted","Data":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.697987 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerStarted","Data":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.700875 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" event={"ID":"d723357a-5423-49c3-9263-ff768f28745f","Type":"ContainerStarted","Data":"69af6dca3592219f9a1f54d88e96dc5462f2f62fba6843bf2b3fb9d68f5af10c"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.701469 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.704238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerStarted","Data":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.704909 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.716131 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.395689262 podStartE2EDuration="4.716109231s" podCreationTimestamp="2026-01-20 15:40:57 +0000 UTC" firstStartedPulling="2026-01-20 15:40:58.826861878 +0000 UTC m=+3054.636692726" lastFinishedPulling="2026-01-20 15:41:00.147281837 +0000 UTC m=+3055.957112695" observedRunningTime="2026-01-20 15:41:01.713022205 +0000 UTC m=+3057.522853053" watchObservedRunningTime="2026-01-20 15:41:01.716109231 +0000 UTC m=+3057.525940089" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.755758 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" podStartSLOduration=3.75573326 podStartE2EDuration="3.75573326s" podCreationTimestamp="2026-01-20 15:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:01.739505639 +0000 UTC m=+3057.549336487" watchObservedRunningTime="2026-01-20 15:41:01.75573326 +0000 UTC m=+3057.565564128" Jan 20 15:41:01 crc kubenswrapper[4949]: I0120 15:41:01.762010 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.761991607 podStartE2EDuration="3.761991607s" podCreationTimestamp="2026-01-20 15:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:01.756908497 +0000 UTC m=+3057.566739355" watchObservedRunningTime="2026-01-20 15:41:01.761991607 +0000 UTC m=+3057.571822465" Jan 20 15:41:02 crc kubenswrapper[4949]: I0120 15:41:02.715571 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" containerID="cri-o://332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" gracePeriod=30 Jan 
Jan 20 15:41:02 crc kubenswrapper[4949]: I0120 15:41:02.715599 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" containerID="cri-o://5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" gracePeriod=30
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.551690 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672077 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") "
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672140 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") "
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672236 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") "
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") "
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672377 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") "
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672408 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") "
Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.672447 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") pod \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\" (UID: \"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.674749 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.683154 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.683266 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts" (OuterVolumeSpecName: "scripts") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.687744 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4" (OuterVolumeSpecName: "kube-api-access-2rfm4") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "kube-api-access-2rfm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.716468 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.727667 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" exitCode=0 Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.727710 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" exitCode=143 Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728100 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728395 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerDied","Data":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728440 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerDied","Data":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728456 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888","Type":"ContainerDied","Data":"d42bc87bc97fd1b16dff21fa9cb1ad72123ff45864348a65bd2af0b78716ecdf"} Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.728475 4949 scope.go:117] "RemoveContainer" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.735533 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data" (OuterVolumeSpecName: "config-data") pod "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" (UID: "3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.767312 4949 scope.go:117] "RemoveContainer" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774613 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774649 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfm4\" (UniqueName: \"kubernetes.io/projected/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-kube-api-access-2rfm4\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774660 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774668 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774675 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774683 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-logs\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.774691 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.796538 4949 scope.go:117] "RemoveContainer" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: E0120 15:41:03.796999 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": container with ID starting with 5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039 not found: ID does not exist" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797130 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} err="failed to get container status \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": rpc error: code = NotFound desc = could not find container \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": container with ID starting with 5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039 not found: ID does not exist" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797241 4949 scope.go:117] "RemoveContainer" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: E0120 15:41:03.797603 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": container with ID starting with 332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b not found: ID does not exist" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797725 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} err="failed to get container status \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": rpc error: code = NotFound desc = could not find container \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": container with ID starting with 332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b not found: ID does not exist" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.797830 4949 scope.go:117] "RemoveContainer" containerID="5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.798159 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039"} err="failed to get container status \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": rpc error: code = NotFound desc = could not find container \"5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039\": container with ID starting with 5fd398ed07cd637bc30a95538d1959f54782f8a2f24f7f4636288dd2a1f6b039 not found: ID does not exist" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.798183 4949 scope.go:117] "RemoveContainer" containerID="332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b" Jan 20 15:41:03 crc kubenswrapper[4949]: I0120 15:41:03.798394 4949 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b"} err="failed to get container status \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": rpc error: code = NotFound desc = could not find container \"332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b\": container with ID starting with 332bc30ba66b045f72945d93725d70e262663a4f4a04b60abc621bac06dae84b not found: ID does not exist" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.060863 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.077257 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.085957 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: E0120 15:41:04.086619 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.086698 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" Jan 20 15:41:04 crc kubenswrapper[4949]: E0120 15:41:04.086779 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.086830 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.087078 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api-log" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.087147 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" containerName="manila-api" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.088102 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.090434 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.090641 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.090775 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.115243 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182106 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-etc-machine-id\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-logs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182228 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np85p\" (UniqueName: \"kubernetes.io/projected/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-kube-api-access-np85p\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182266 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182299 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-scripts\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182329 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-public-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data-custom\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.182588 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284795 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np85p\" (UniqueName: \"kubernetes.io/projected/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-kube-api-access-np85p\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-scripts\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284904 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-public-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.284961 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data-custom\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285081 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-etc-machine-id\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-logs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.285550 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-logs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.286378 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-etc-machine-id\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.289590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-public-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.290422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.291486 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.302106 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-config-data-custom\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.302614 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.303093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-scripts\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.306236 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np85p\" (UniqueName: \"kubernetes.io/projected/9d247f3c-18c5-4045-a6a5-e25dc78c33ee-kube-api-access-np85p\") pod \"manila-api-0\" (UID: \"9d247f3c-18c5-4045-a6a5-e25dc78c33ee\") " pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.406217 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.806114 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888" path="/var/lib/kubelet/pods/3a5ca1bf-b5a9-49cb-aacd-6d9ac032a888/volumes" Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945089 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945341 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-central-agent" containerID="cri-o://30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1" gracePeriod=30 Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945451 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" containerID="cri-o://423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6" gracePeriod=30 Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945486 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" containerID="cri-o://48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a" gracePeriod=30 Jan 20 15:41:04 crc kubenswrapper[4949]: I0120 15:41:04.945535 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" containerID="cri-o://f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e" gracePeriod=30 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.071147 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780547 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6" exitCode=0 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780577 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a" exitCode=2 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780585 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e" exitCode=0 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780593 4949 generic.go:334] "Generic (PLEG): container finished" podID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerID="30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1" exitCode=0 Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780613 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6"} Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780643 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a"} Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780656 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e"} Jan 20 15:41:05 crc kubenswrapper[4949]: I0120 15:41:05.780665 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1"} Jan 20 15:41:07 crc kubenswrapper[4949]: I0120 15:41:07.788873 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:41:07 crc kubenswrapper[4949]: E0120 15:41:07.789878 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.349985 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.657809 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-hf624" Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.737243 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.737512 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" containerID="cri-o://632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" gracePeriod=10 Jan 20 15:41:08 crc kubenswrapper[4949]: W0120 15:41:08.783617 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d247f3c_18c5_4045_a6a5_e25dc78c33ee.slice/crio-fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309 WatchSource:0}: Error finding container fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309: Status 404 returned error can't find the container with id fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309 Jan 20 15:41:08 crc kubenswrapper[4949]: I0120 15:41:08.818475 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9d247f3c-18c5-4045-a6a5-e25dc78c33ee","Type":"ContainerStarted","Data":"fd7746badb8c3f8110c1d539269f7b9ff8b39c9b0907d6ddd96d53f718eb2309"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.243021 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.277236 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.411405 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.411470 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.412105 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414058 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414149 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414297 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414411 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414468 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414500 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414706 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414762 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414784 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") pod \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\" (UID: \"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.414823 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") pod \"4108fe7d-5c92-44fa-ad65-bfaee526f439\" (UID: \"4108fe7d-5c92-44fa-ad65-bfaee526f439\") " Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.415999 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.421662 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.455793 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts" (OuterVolumeSpecName: "scripts") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.462721 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8" (OuterVolumeSpecName: "kube-api-access-99zf8") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "kube-api-access-99zf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.468857 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22" (OuterVolumeSpecName: "kube-api-access-d8g22") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "kube-api-access-d8g22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.522942 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99zf8\" (UniqueName: \"kubernetes.io/projected/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-kube-api-access-99zf8\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.523234 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4108fe7d-5c92-44fa-ad65-bfaee526f439-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.523243 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.523251 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8g22\" (UniqueName: \"kubernetes.io/projected/4108fe7d-5c92-44fa-ad65-bfaee526f439-kube-api-access-d8g22\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.573796 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.627896 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.749342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.757674 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.766751 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config" (OuterVolumeSpecName: "config") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.788236 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.795143 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.807432 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" (UID: "d5fd960d-ae25-4d53-bf2e-c952c18f5c4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.826764 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.829969 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831229 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4108fe7d-5c92-44fa-ad65-bfaee526f439","Type":"ContainerDied","Data":"8f0dd94a9e63de42a5122bf4ccb941587cc9b12585cbfa4f431123811ef49ec3"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831277 4949 scope.go:117] "RemoveContainer" containerID="423b474333638a1bfcf75f7528860a3c851cfaf241381512b11005378808c8e6" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831868 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831896 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831907 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831915 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831926 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831934 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-config\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.831942 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.835531 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9d247f3c-18c5-4045-a6a5-e25dc78c33ee","Type":"ContainerStarted","Data":"702761acc29d2479b8a0e2c5fc083db526a0b9260e3b6e0d6941a0b424d45019"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837262 4949 generic.go:334] "Generic (PLEG): container finished" podID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" exitCode=0 Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerDied","Data":"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837331 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" 
event={"ID":"d5fd960d-ae25-4d53-bf2e-c952c18f5c4e","Type":"ContainerDied","Data":"28ea64f9b5c04147b3009b47c17f0604ad8b61b4a2b278a6f07f0801d7c72f92"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.837380 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-tm44w" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.842061 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerStarted","Data":"a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795"} Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.845947 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data" (OuterVolumeSpecName: "config-data") pod "4108fe7d-5c92-44fa-ad65-bfaee526f439" (UID: "4108fe7d-5c92-44fa-ad65-bfaee526f439"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.933839 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108fe7d-5c92-44fa-ad65-bfaee526f439-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:09 crc kubenswrapper[4949]: I0120 15:41:09.994022 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.005533 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-tm44w"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.011585 4949 scope.go:117] "RemoveContainer" containerID="48c620dafb593ecafb6153f185eb283889831c8ba9d4aa7c0be05251a937113a" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.057860 4949 scope.go:117] "RemoveContainer" containerID="f1b430003696be45173c7e9d47dbbc1372f613e57f0cc17f5033b3b5852aa99e" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.082113 4949 scope.go:117] "RemoveContainer" containerID="30b9afdb670de41ce14d1aeae910ae1bc6997a01136b9a37eda2ada7a92252e1" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.121332 4949 scope.go:117] "RemoveContainer" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.152788 4949 scope.go:117] "RemoveContainer" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.185317 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.197160 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.216642 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217069 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="init" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217086 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="init" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217108 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" 
containerName="ceilometer-central-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217114 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-central-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217127 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217133 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217147 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217153 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217170 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217177 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.217190 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217196 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217362 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="proxy-httpd" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217375 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-notification-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217382 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="ceilometer-central-agent" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217394 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" containerName="dnsmasq-dns" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.217407 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" containerName="sg-core" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.219177 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.226470 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.226695 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.226888 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.230980 4949 scope.go:117] "RemoveContainer" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.232804 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e\": container with ID starting with 632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e not found: ID does not exist" containerID="632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.232855 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e"} err="failed to get container status \"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e\": rpc error: code = NotFound desc = could not find container \"632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e\": container with ID starting with 632aa3a087ad79053134f553b8ec1655e470a5457db699d67dbe54282798129e not found: ID does not exist" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.232886 4949 scope.go:117] "RemoveContainer" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" Jan 20 15:41:10 crc kubenswrapper[4949]: E0120 15:41:10.233303 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854\": container with ID starting with 8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854 not found: ID does not exist" containerID="8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.233331 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854"} err="failed to get container status \"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854\": rpc error: code = NotFound desc = could not find container \"8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854\": container with ID starting with 8bcbea75d8416203585be76783349c29f14a254e2ef38f696fd98aa623455854 not found: ID does not exist" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.246015 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345267 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 
crc kubenswrapper[4949]: I0120 15:41:10.345334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345459 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345546 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345651 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345679 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.345741 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448076 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") 
pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448126 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.448268 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.449443 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.453956 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.454075 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.454076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.454663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.455705 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.464621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.466649 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"ceilometer-0\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.615820 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.819226 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4108fe7d-5c92-44fa-ad65-bfaee526f439" path="/var/lib/kubelet/pods/4108fe7d-5c92-44fa-ad65-bfaee526f439/volumes" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.820834 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5fd960d-ae25-4d53-bf2e-c952c18f5c4e" path="/var/lib/kubelet/pods/d5fd960d-ae25-4d53-bf2e-c952c18f5c4e/volumes" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.860078 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9d247f3c-18c5-4045-a6a5-e25dc78c33ee","Type":"ContainerStarted","Data":"ba81347e3e0d091efdb765becf9f0428cb5277bb150959039c96553feb449aa4"} Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.861332 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.866857 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerStarted","Data":"317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86"} Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.905312 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.905285173 podStartE2EDuration="6.905285173s" podCreationTimestamp="2026-01-20 15:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:10.891687555 +0000 UTC m=+3066.701518413" watchObservedRunningTime="2026-01-20 15:41:10.905285173 +0000 UTC m=+3066.715116061" Jan 20 15:41:10 crc kubenswrapper[4949]: I0120 15:41:10.932148 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.129244864 podStartE2EDuration="13.93212902s" podCreationTimestamp="2026-01-20 15:40:57 +0000 UTC" firstStartedPulling="2026-01-20 15:40:59.054564311 +0000 UTC m=+3054.864395169" lastFinishedPulling="2026-01-20 15:41:08.857448467 +0000 
UTC m=+3064.667279325" observedRunningTime="2026-01-20 15:41:10.928305519 +0000 UTC m=+3066.738136377" watchObservedRunningTime="2026-01-20 15:41:10.93212902 +0000 UTC m=+3066.741959878" Jan 20 15:41:11 crc kubenswrapper[4949]: I0120 15:41:11.193994 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:11 crc kubenswrapper[4949]: I0120 15:41:11.886202 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"90ddd88b841bcb897f9b9f285d3c118a9a69ce0fe7d69cd564aa22f05881cee8"} Jan 20 15:41:12 crc kubenswrapper[4949]: I0120 15:41:12.085735 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:12 crc kubenswrapper[4949]: I0120 15:41:12.896204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6"} Jan 20 15:41:13 crc kubenswrapper[4949]: I0120 15:41:13.907723 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2"} Jan 20 15:41:14 crc kubenswrapper[4949]: I0120 15:41:14.918175 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b"} Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.945982 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerStarted","Data":"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551"} Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946575 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946460 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" containerID="cri-o://9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" gracePeriod=30 Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946115 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" containerID="cri-o://dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" gracePeriod=30 Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946489 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" containerID="cri-o://60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" gracePeriod=30 Jan 20 15:41:17 crc kubenswrapper[4949]: I0120 15:41:17.946476 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" containerID="cri-o://635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" gracePeriod=30 Jan 20 15:41:17 
crc kubenswrapper[4949]: I0120 15:41:17.973017 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.620784076 podStartE2EDuration="7.97299932s" podCreationTimestamp="2026-01-20 15:41:10 +0000 UTC" firstStartedPulling="2026-01-20 15:41:11.197120108 +0000 UTC m=+3067.006950956" lastFinishedPulling="2026-01-20 15:41:17.549335342 +0000 UTC m=+3073.359166200" observedRunningTime="2026-01-20 15:41:17.971928996 +0000 UTC m=+3073.781759874" watchObservedRunningTime="2026-01-20 15:41:17.97299932 +0000 UTC m=+3073.782830188" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.220822 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.619278 4949 scope.go:117] "RemoveContainer" containerID="76a5595e5cd26fffaa0ceb9cde98dcd008151ee5eae0290b95de7858f3eded5f" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.646140 4949 scope.go:117] "RemoveContainer" containerID="0797e6f2e6fb98ae6289f5e7341361f0a326a8f7cd1e206437d9e0f5fc70ee25" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.678008 4949 scope.go:117] "RemoveContainer" containerID="b25eb601db495762aa2c1dce730d0bc786cef26614edc2df7ad8c09198618acd" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.789146 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:41:18 crc kubenswrapper[4949]: E0120 15:41:18.789495 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.958175 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" exitCode=0 Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959342 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" exitCode=2 Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959416 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" exitCode=0 Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551"} Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b"} Jan 20 15:41:18 crc kubenswrapper[4949]: I0120 15:41:18.959752 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2"} Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.910739 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.964676 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.967697 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" containerID="cri-o://ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" gracePeriod=30 Jan 20 15:41:19 crc kubenswrapper[4949]: I0120 15:41:19.967767 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" containerID="cri-o://6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" gracePeriod=30 Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.635573 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.759896 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760011 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760178 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760249 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760297 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760427 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760491 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.760551 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") pod \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\" (UID: \"b17688bb-6e3e-4b48-bffa-bf1383aa47a1\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761092 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761409 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761928 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.761951 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.765052 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts" (OuterVolumeSpecName: "scripts") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.766596 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx" (OuterVolumeSpecName: "kube-api-access-wgnrx") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "kube-api-access-wgnrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.808583 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.825448 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.843474 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864888 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864917 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgnrx\" (UniqueName: \"kubernetes.io/projected/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-kube-api-access-wgnrx\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864927 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.864943 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.868569 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.930357 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data" (OuterVolumeSpecName: "config-data") pod "b17688bb-6e3e-4b48-bffa-bf1383aa47a1" (UID: "b17688bb-6e3e-4b48-bffa-bf1383aa47a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967081 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967504 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967588 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967718 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.967736 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") pod \"8ed776ab-5efa-46df-b070-54de4042b64e\" (UID: \"8ed776ab-5efa-46df-b070-54de4042b64e\") " Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.968408 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.969048 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.969073 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed776ab-5efa-46df-b070-54de4042b64e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.969089 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b17688bb-6e3e-4b48-bffa-bf1383aa47a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.974649 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.974755 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts" (OuterVolumeSpecName: "scripts") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.975954 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6" (OuterVolumeSpecName: "kube-api-access-m8dq6") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "kube-api-access-m8dq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981055 4949 generic.go:334] "Generic (PLEG): container finished" podID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" exitCode=0 Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981124 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981149 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981182 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b17688bb-6e3e-4b48-bffa-bf1383aa47a1","Type":"ContainerDied","Data":"90ddd88b841bcb897f9b9f285d3c118a9a69ce0fe7d69cd564aa22f05881cee8"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.981206 4949 scope.go:117] "RemoveContainer" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985049 4949 generic.go:334] "Generic (PLEG): container finished" podID="8ed776ab-5efa-46df-b070-54de4042b64e" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" exitCode=0 Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985079 4949 generic.go:334] "Generic (PLEG): container finished" podID="8ed776ab-5efa-46df-b070-54de4042b64e" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" exitCode=0 Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerDied","Data":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985123 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerDied","Data":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985146 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8ed776ab-5efa-46df-b070-54de4042b64e","Type":"ContainerDied","Data":"5d5d321f43f48229f54657ca4759751495b89237e7296cf027757d58bd32dcaa"} Jan 20 15:41:20 crc kubenswrapper[4949]: I0120 15:41:20.985222 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.017756 4949 scope.go:117] "RemoveContainer" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.018138 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.026366 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.041735 4949 scope.go:117] "RemoveContainer" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.046817 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047245 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047310 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047374 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047436 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047494 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047562 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047617 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047666 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.047732 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.047873 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.048064 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048130 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048449 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="proxy-httpd" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048600 4949 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="manila-scheduler" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048677 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="sg-core" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048750 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed776ab-5efa-46df-b070-54de4042b64e" containerName="probe" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048810 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-notification-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.048915 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" containerName="ceilometer-central-agent" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.052635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.059730 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.062419 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.063466 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.062696 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072267 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072361 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dq6\" (UniqueName: \"kubernetes.io/projected/8ed776ab-5efa-46df-b070-54de4042b64e-kube-api-access-m8dq6\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072443 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.072295 4949 scope.go:117] "RemoveContainer" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.108934 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.109508 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data" (OuterVolumeSpecName: "config-data") pod "8ed776ab-5efa-46df-b070-54de4042b64e" (UID: "8ed776ab-5efa-46df-b070-54de4042b64e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117174 4949 scope.go:117] "RemoveContainer" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.117621 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551\": container with ID starting with 9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551 not found: ID does not exist" containerID="9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117660 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551"} err="failed to get container status \"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551\": rpc error: code = NotFound desc = could not find container \"9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551\": container with ID starting with 9cef419d808f6867370227a14021b9ac8c22def8c52064348c6507b76d0b8551 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117691 4949 scope.go:117] "RemoveContainer" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.117940 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b\": container with ID starting with 635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b not found: ID does not exist" containerID="635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117969 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b"} err="failed to get container status \"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b\": rpc error: code = NotFound desc = could not find container \"635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b\": container with ID starting with 635996e498c050fae06f067728223f7172c5253695e59f7b878294d2842bc56b not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.117988 4949 scope.go:117] "RemoveContainer" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.118260 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2\": container with ID starting with 60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2 not found: ID does not exist" containerID="60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118304 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2"} err="failed to get container status \"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2\": rpc error: code = NotFound desc = could not 
find container \"60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2\": container with ID starting with 60c38b9dc34dfaebe0e484b15c86e63aa398d930dbcfbb8a3349762e673fb8c2 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118321 4949 scope.go:117] "RemoveContainer" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.118769 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6\": container with ID starting with dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6 not found: ID does not exist" containerID="dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118803 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6"} err="failed to get container status \"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6\": rpc error: code = NotFound desc = could not find container \"dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6\": container with ID starting with dd0c39963d74f71ce1a81d1b2713ac025341051412c2b903e8917c8543795cb6 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.118816 4949 scope.go:117] "RemoveContainer" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.141796 4949 scope.go:117] "RemoveContainer" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.160583 4949 scope.go:117] "RemoveContainer" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.160958 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": container with ID starting with 6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40 not found: ID does not exist" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.160986 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} err="failed to get container status \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": rpc error: code = NotFound desc = could not find container \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": container with ID starting with 6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161008 4949 scope.go:117] "RemoveContainer" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: E0120 15:41:21.161328 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": container with ID starting with 
ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e not found: ID does not exist" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161351 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} err="failed to get container status \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": rpc error: code = NotFound desc = could not find container \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": container with ID starting with ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161368 4949 scope.go:117] "RemoveContainer" containerID="6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161794 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40"} err="failed to get container status \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": rpc error: code = NotFound desc = could not find container \"6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40\": container with ID starting with 6ef83f2fee084d4b8cb7f306efb54214777fb2da420fac9abb059b12ccfa0d40 not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.161840 4949 scope.go:117] "RemoveContainer" containerID="ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.162064 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e"} err="failed to get container status \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": rpc error: code = NotFound desc = could not find container \"ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e\": container with ID starting with ad9c037d764749c524cf7b3c9ee664b22dd7c30835b288c235f647e35ebdf34e not found: ID does not exist" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179822 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179872 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99nm\" (UniqueName: \"kubernetes.io/projected/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-kube-api-access-j99nm\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.179922 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-config-data\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180017 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-scripts\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180115 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180159 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180258 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.180275 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed776ab-5efa-46df-b070-54de4042b64e-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281584 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99nm\" (UniqueName: \"kubernetes.io/projected/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-kube-api-access-j99nm\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281690 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-config-data\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-scripts\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281776 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281842 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.281884 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.282197 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.283031 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.285415 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-scripts\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.286767 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-config-data\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.287361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.299323 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.302330 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99nm\" (UniqueName: \"kubernetes.io/projected/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-kube-api-access-j99nm\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.303476 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ddebe6-ef20-4de2-9eaa-690312bbbf0a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a\") " pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.389283 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.400438 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.413966 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.425246 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.427167 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.437679 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.441632 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613329 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613670 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rj5\" (UniqueName: \"kubernetes.io/projected/acbf90ca-14f6-4274-b63b-f4e71c1ce845-kube-api-access-t8rj5\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613740 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-scripts\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613780 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613920 
4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acbf90ca-14f6-4274-b63b-f4e71c1ce845-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.613976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acbf90ca-14f6-4274-b63b-f4e71c1ce845-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716184 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716218 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acbf90ca-14f6-4274-b63b-f4e71c1ce845-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716250 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716495 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8rj5\" (UniqueName: \"kubernetes.io/projected/acbf90ca-14f6-4274-b63b-f4e71c1ce845-kube-api-access-t8rj5\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716581 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-scripts\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.716615 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.721039 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data-custom\") pod 
\"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.729120 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.729160 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-scripts\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.729621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbf90ca-14f6-4274-b63b-f4e71c1ce845-config-data\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.735226 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8rj5\" (UniqueName: \"kubernetes.io/projected/acbf90ca-14f6-4274-b63b-f4e71c1ce845-kube-api-access-t8rj5\") pod \"manila-scheduler-0\" (UID: \"acbf90ca-14f6-4274-b63b-f4e71c1ce845\") " pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.837876 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.863158 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 20 15:41:21 crc kubenswrapper[4949]: W0120 15:41:21.869687 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3ddebe6_ef20_4de2_9eaa_690312bbbf0a.slice/crio-47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289 WatchSource:0}: Error finding container 47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289: Status 404 returned error can't find the container with id 47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289 Jan 20 15:41:21 crc kubenswrapper[4949]: I0120 15:41:21.993515 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"47e3f82c7c11d1c39c74447f14ec406c8ff24d2b0613257eaf5f3cd2621f0289"} Jan 20 15:41:22 crc kubenswrapper[4949]: W0120 15:41:22.355756 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacbf90ca_14f6_4274_b63b_f4e71c1ce845.slice/crio-551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb WatchSource:0}: Error finding container 551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb: Status 404 returned error can't find the container with id 551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb Jan 20 15:41:22 crc kubenswrapper[4949]: I0120 15:41:22.361257 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 20 15:41:22 crc kubenswrapper[4949]: I0120 15:41:22.800095 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8ed776ab-5efa-46df-b070-54de4042b64e" path="/var/lib/kubelet/pods/8ed776ab-5efa-46df-b070-54de4042b64e/volumes" Jan 20 15:41:22 crc kubenswrapper[4949]: I0120 15:41:22.801534 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17688bb-6e3e-4b48-bffa-bf1383aa47a1" path="/var/lib/kubelet/pods/b17688bb-6e3e-4b48-bffa-bf1383aa47a1/volumes" Jan 20 15:41:23 crc kubenswrapper[4949]: I0120 15:41:23.039854 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"acbf90ca-14f6-4274-b63b-f4e71c1ce845","Type":"ContainerStarted","Data":"9d6e344563f46fcb47a17dca04e6553f906ca62203c3e44a70be2a6e915e2a43"} Jan 20 15:41:23 crc kubenswrapper[4949]: I0120 15:41:23.039915 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"acbf90ca-14f6-4274-b63b-f4e71c1ce845","Type":"ContainerStarted","Data":"551e32616baff5790371df9918526a7d72d356b601d93e3bc454b29fe9fe99bb"} Jan 20 15:41:23 crc kubenswrapper[4949]: I0120 15:41:23.067731 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"3fd48d6989ae3a58a944703ba4e5d06481069dec4db5548c5db9d8de833a165a"} Jan 20 15:41:24 crc kubenswrapper[4949]: I0120 15:41:24.080369 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"acbf90ca-14f6-4274-b63b-f4e71c1ce845","Type":"ContainerStarted","Data":"1fa86fee69bc75f97813c969dd9680c1d3fcbb4983ff73cdfde6219d94bfb4e9"} Jan 20 15:41:24 crc kubenswrapper[4949]: I0120 15:41:24.121000 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.12097679 podStartE2EDuration="3.12097679s" podCreationTimestamp="2026-01-20 15:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:24.104950124 +0000 UTC m=+3079.914781002" watchObservedRunningTime="2026-01-20 15:41:24.12097679 +0000 UTC m=+3079.930807648" Jan 20 15:41:25 crc kubenswrapper[4949]: I0120 15:41:25.788110 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 20 15:41:27 crc kubenswrapper[4949]: I0120 15:41:27.117150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"8ed49821affc6ebd664cee7c204a957d406ecac6f3be7536e5393efd5351dc20"} Jan 20 15:41:28 crc kubenswrapper[4949]: I0120 15:41:28.128730 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"9318a3539773c714bb7ddcdee14692d7a15c6cf4713f5d5fd44856adb68ca5ef"} Jan 20 15:41:29 crc kubenswrapper[4949]: I0120 15:41:29.728849 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 20 15:41:29 crc kubenswrapper[4949]: I0120 15:41:29.825006 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:41:30 crc kubenswrapper[4949]: I0120 15:41:30.155086 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share" 
containerID="cri-o://a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795" gracePeriod=30 Jan 20 15:41:30 crc kubenswrapper[4949]: I0120 15:41:30.156358 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe" containerID="cri-o://317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86" gracePeriod=30 Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.166474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ddebe6-ef20-4de2-9eaa-690312bbbf0a","Type":"ContainerStarted","Data":"f0563b5270f17a5dd68925da295f7260a8e6cdad02ab8e79f06c169bbdcf674b"} Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.167121 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169045 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerID="317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86" exitCode=0 Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169068 4949 generic.go:334] "Generic (PLEG): container finished" podID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerID="a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795" exitCode=1 Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169085 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerDied","Data":"317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86"} Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.169103 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerDied","Data":"a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795"} Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.207316 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9236957989999999 podStartE2EDuration="10.20726853s" podCreationTimestamp="2026-01-20 15:41:21 +0000 UTC" firstStartedPulling="2026-01-20 15:41:21.874098602 +0000 UTC m=+3077.683929460" lastFinishedPulling="2026-01-20 15:41:30.157671333 +0000 UTC m=+3085.967502191" observedRunningTime="2026-01-20 15:41:31.191034519 +0000 UTC m=+3087.000865377" watchObservedRunningTime="2026-01-20 15:41:31.20726853 +0000 UTC m=+3087.017099408" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.397815 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533099 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533538 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533586 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533779 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533804 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533854 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.533896 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") pod \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\" (UID: \"c4221b9c-f2d4-437c-9b6c-1b9341a74219\") " Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.534281 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.534370 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.539129 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts" (OuterVolumeSpecName: "scripts") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.539342 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.540213 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph" (OuterVolumeSpecName: "ceph") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.549880 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k" (OuterVolumeSpecName: "kube-api-access-mwm9k") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "kube-api-access-mwm9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.589564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.627731 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data" (OuterVolumeSpecName: "config-data") pod "c4221b9c-f2d4-437c-9b6c-1b9341a74219" (UID: "c4221b9c-f2d4-437c-9b6c-1b9341a74219"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637340 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwm9k\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-kube-api-access-mwm9k\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637375 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c4221b9c-f2d4-437c-9b6c-1b9341a74219-ceph\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637390 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637399 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637407 4949 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c4221b9c-f2d4-437c-9b6c-1b9341a74219-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637415 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637424 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-config-data\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.637432 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4221b9c-f2d4-437c-9b6c-1b9341a74219-scripts\") on node \"crc\" DevicePath \"\"" Jan 20 15:41:31 crc kubenswrapper[4949]: I0120 15:41:31.838443 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.197644 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.201453 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c4221b9c-f2d4-437c-9b6c-1b9341a74219","Type":"ContainerDied","Data":"60e7d4a8ac8e4a2e8bf0ecf78ff8e8d81b95e8b85f820ad01959c6f7e2278fab"} Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.201495 4949 scope.go:117] "RemoveContainer" containerID="317eba973901cc17fd33a65878db6fe6c7889221c2c336abe1ad1042d1ec2f86" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.236265 4949 scope.go:117] "RemoveContainer" containerID="a3930e26c606aef85f011bf5f4aef8a3539cb119b20dfd5c410749e48b44c795" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.253864 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.265267 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.277751 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:41:32 crc kubenswrapper[4949]: E0120 15:41:32.278299 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.278323 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share" Jan 20 15:41:32 crc kubenswrapper[4949]: E0120 15:41:32.278359 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.278368 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.283044 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="probe" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.283114 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" containerName="manila-share" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.285230 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.287350 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.291196 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377347 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8xn\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-kube-api-access-dz8xn\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377446 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377506 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-ceph\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377539 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377587 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377754 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-scripts\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.377779 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc 
kubenswrapper[4949]: I0120 15:41:32.480041 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480119 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-scripts\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz8xn\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-kube-api-access-dz8xn\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480296 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480327 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-ceph\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480378 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.480500 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.484821 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.484900 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-ceph\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.485326 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.485569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-config-data\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.493614 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-scripts\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.497628 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz8xn\" (UniqueName: \"kubernetes.io/projected/84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3-kube-api-access-dz8xn\") pod \"manila-share-share1-0\" (UID: \"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3\") " pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.622237 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.788718 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:41:32 crc kubenswrapper[4949]: E0120 15:41:32.789274 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:41:32 crc kubenswrapper[4949]: I0120 15:41:32.799664 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4221b9c-f2d4-437c-9b6c-1b9341a74219" path="/var/lib/kubelet/pods/c4221b9c-f2d4-437c-9b6c-1b9341a74219/volumes" Jan 20 15:41:33 crc kubenswrapper[4949]: I0120 15:41:33.144991 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 20 15:41:33 crc kubenswrapper[4949]: I0120 15:41:33.205908 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3","Type":"ContainerStarted","Data":"66e30ce5a65196e0578a77c11f58ba35eb1fdea646668071e499e612a1115c49"} Jan 20 15:41:34 crc kubenswrapper[4949]: I0120 15:41:34.218213 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3","Type":"ContainerStarted","Data":"8202b7507d3576386b01c6b4b86ee50037ba30a968180c26c9606bf5a1d14731"} Jan 20 15:41:34 crc kubenswrapper[4949]: I0120 15:41:34.218500 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3","Type":"ContainerStarted","Data":"cd2a419f428ffcb48607715118b8db44746180ba3074dd046ff563c69163191a"} Jan 20 15:41:34 crc kubenswrapper[4949]: I0120 15:41:34.241693 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.241664228 podStartE2EDuration="2.241664228s" podCreationTimestamp="2026-01-20 15:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:41:34.240057307 +0000 UTC m=+3090.049888175" watchObservedRunningTime="2026-01-20 15:41:34.241664228 +0000 UTC m=+3090.051495086" Jan 20 15:41:42 crc kubenswrapper[4949]: I0120 15:41:42.623236 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 20 15:41:43 crc kubenswrapper[4949]: I0120 15:41:43.301400 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 20 15:41:43 crc kubenswrapper[4949]: I0120 15:41:43.788919 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:41:43 crc kubenswrapper[4949]: E0120 15:41:43.789167 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:41:51 crc kubenswrapper[4949]: I0120 15:41:51.398500 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 20 15:41:54 crc kubenswrapper[4949]: I0120 15:41:54.146250 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 20 15:41:56 crc kubenswrapper[4949]: I0120 15:41:56.789211 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:41:56 crc kubenswrapper[4949]: E0120 15:41:56.789969 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:42:10 crc kubenswrapper[4949]: I0120 15:42:10.789993 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:42:10 crc kubenswrapper[4949]: E0120 15:42:10.791005 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:42:24 crc kubenswrapper[4949]: I0120 15:42:24.796877 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:42:24 crc kubenswrapper[4949]: E0120 15:42:24.797872 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:42:36 crc kubenswrapper[4949]: I0120 15:42:36.789202 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:42:36 crc kubenswrapper[4949]: E0120 15:42:36.790061 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:42:50 crc kubenswrapper[4949]: I0120 15:42:50.789726 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:42:50 crc kubenswrapper[4949]: E0120 15:42:50.790301 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:43:02 crc kubenswrapper[4949]: I0120 15:43:02.789325 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:43:02 crc kubenswrapper[4949]: E0120 15:43:02.790555 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:43:14 crc kubenswrapper[4949]: I0120 15:43:14.795025 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:43:14 crc kubenswrapper[4949]: E0120 15:43:14.795863 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:43:29 crc kubenswrapper[4949]: I0120 15:43:29.790732 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:43:30 crc kubenswrapper[4949]: I0120 15:43:30.447085 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1"} Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.022829 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.025619 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.059751 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.180653 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.181110 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.181312 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.283231 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.283316 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.283355 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.284009 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.284014 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.334758 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"redhat-marketplace-qdm6q\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.375329 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:44:51 crc kubenswrapper[4949]: I0120 15:44:51.904916 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.232418 4949 generic.go:334] "Generic (PLEG): container finished" podID="a858c71e-19cb-4464-91f5-366a6695586c" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" exitCode=0 Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.232471 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73"} Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.232773 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerStarted","Data":"e9cde6f8f0b1fe32ffe869b0de4686fd3fb41b702a8f3a6a856dc7ae163544cc"} Jan 20 15:44:52 crc kubenswrapper[4949]: I0120 15:44:52.235733 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:44:53 crc kubenswrapper[4949]: I0120 15:44:53.248776 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerStarted","Data":"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3"} Jan 20 15:44:54 crc kubenswrapper[4949]: I0120 15:44:54.259152 4949 generic.go:334] "Generic (PLEG): container finished" podID="a858c71e-19cb-4464-91f5-366a6695586c" containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" exitCode=0 Jan 20 15:44:54 crc kubenswrapper[4949]: I0120 15:44:54.259212 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3"} Jan 20 15:44:56 crc kubenswrapper[4949]: I0120 15:44:56.284460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerStarted","Data":"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6"} Jan 20 15:44:56 crc kubenswrapper[4949]: I0120 15:44:56.309912 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdm6q" podStartSLOduration=2.452525749 podStartE2EDuration="5.309896654s" podCreationTimestamp="2026-01-20 15:44:51 +0000 UTC" firstStartedPulling="2026-01-20 15:44:52.235511303 +0000 UTC m=+3288.045342161" lastFinishedPulling="2026-01-20 15:44:55.092882218 +0000 UTC m=+3290.902713066" observedRunningTime="2026-01-20 15:44:56.305672471 +0000 UTC m=+3292.115503329" watchObservedRunningTime="2026-01-20 15:44:56.309896654 +0000 UTC 
m=+3292.119727512" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.163431 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"] Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.165373 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.167362 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.167444 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.174173 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"] Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.274804 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.275194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.275273 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.376471 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.376644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.376665 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.377634 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.384599 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.397119 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"collect-profiles-29482065-j9v6f\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:00 crc kubenswrapper[4949]: I0120 15:45:00.540102 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.018078 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f"] Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.333763 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerStarted","Data":"f99dccecfed096173d8e99ec6d4c12bef0a1b039b8d9ebb5544f5d3425076ef4"} Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.334102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerStarted","Data":"9c93255f0127d69f3aa827e15d1137974f98dc4d2e41e90fa555d2a6d7823453"} Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.351696 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" podStartSLOduration=1.351674643 podStartE2EDuration="1.351674643s" podCreationTimestamp="2026-01-20 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:45:01.349473544 +0000 UTC m=+3297.159304402" watchObservedRunningTime="2026-01-20 15:45:01.351674643 +0000 UTC m=+3297.161505501" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.375698 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.375750 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:01 crc kubenswrapper[4949]: I0120 15:45:01.447748 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.344317 4949 generic.go:334] "Generic (PLEG): container finished" podID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerID="f99dccecfed096173d8e99ec6d4c12bef0a1b039b8d9ebb5544f5d3425076ef4" exitCode=0 Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.345906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerDied","Data":"f99dccecfed096173d8e99ec6d4c12bef0a1b039b8d9ebb5544f5d3425076ef4"} Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.407549 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:02 crc kubenswrapper[4949]: I0120 15:45:02.481885 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.727573 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.856032 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") pod \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.856258 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") pod \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.856296 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") pod \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\" (UID: \"4d0ce17c-1d57-4af7-b417-1ab6838117c8\") " Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.857026 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d0ce17c-1d57-4af7-b417-1ab6838117c8" (UID: "4d0ce17c-1d57-4af7-b417-1ab6838117c8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.857405 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d0ce17c-1d57-4af7-b417-1ab6838117c8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.862699 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d0ce17c-1d57-4af7-b417-1ab6838117c8" (UID: "4d0ce17c-1d57-4af7-b417-1ab6838117c8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.863596 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm" (OuterVolumeSpecName: "kube-api-access-4hjpm") pod "4d0ce17c-1d57-4af7-b417-1ab6838117c8" (UID: "4d0ce17c-1d57-4af7-b417-1ab6838117c8"). InnerVolumeSpecName "kube-api-access-4hjpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.959780 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d0ce17c-1d57-4af7-b417-1ab6838117c8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:03 crc kubenswrapper[4949]: I0120 15:45:03.959821 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjpm\" (UniqueName: \"kubernetes.io/projected/4d0ce17c-1d57-4af7-b417-1ab6838117c8-kube-api-access-4hjpm\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.365731 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" event={"ID":"4d0ce17c-1d57-4af7-b417-1ab6838117c8","Type":"ContainerDied","Data":"9c93255f0127d69f3aa827e15d1137974f98dc4d2e41e90fa555d2a6d7823453"} Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.366019 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c93255f0127d69f3aa827e15d1137974f98dc4d2e41e90fa555d2a6d7823453" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.365859 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdm6q" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" containerID="cri-o://67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" gracePeriod=2 Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.365790 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482065-j9v6f" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.426639 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.435826 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482020-7x2fc"] Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.800534 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591138ca-7bcb-4584-8089-82e6223d1457" path="/var/lib/kubelet/pods/591138ca-7bcb-4584-8089-82e6223d1457/volumes" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.867062 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.980755 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") pod \"a858c71e-19cb-4464-91f5-366a6695586c\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.981594 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") pod \"a858c71e-19cb-4464-91f5-366a6695586c\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.981730 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") pod \"a858c71e-19cb-4464-91f5-366a6695586c\" (UID: \"a858c71e-19cb-4464-91f5-366a6695586c\") " Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.981793 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities" (OuterVolumeSpecName: "utilities") pod "a858c71e-19cb-4464-91f5-366a6695586c" (UID: "a858c71e-19cb-4464-91f5-366a6695586c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.982369 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:04 crc kubenswrapper[4949]: I0120 15:45:04.986988 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw" (OuterVolumeSpecName: "kube-api-access-fq5lw") pod "a858c71e-19cb-4464-91f5-366a6695586c" (UID: "a858c71e-19cb-4464-91f5-366a6695586c"). InnerVolumeSpecName "kube-api-access-fq5lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.000142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a858c71e-19cb-4464-91f5-366a6695586c" (UID: "a858c71e-19cb-4464-91f5-366a6695586c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.084377 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a858c71e-19cb-4464-91f5-366a6695586c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.084415 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/a858c71e-19cb-4464-91f5-366a6695586c-kube-api-access-fq5lw\") on node \"crc\" DevicePath \"\"" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379391 4949 generic.go:334] "Generic (PLEG): container finished" podID="a858c71e-19cb-4464-91f5-366a6695586c" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" exitCode=0 Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379502 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdm6q" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379568 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6"} Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdm6q" event={"ID":"a858c71e-19cb-4464-91f5-366a6695586c","Type":"ContainerDied","Data":"e9cde6f8f0b1fe32ffe869b0de4686fd3fb41b702a8f3a6a856dc7ae163544cc"} Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.379740 4949 scope.go:117] "RemoveContainer" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.417915 4949 scope.go:117] "RemoveContainer" containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.446680 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.448196 4949 scope.go:117] "RemoveContainer" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.458980 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdm6q"] Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.499566 4949 scope.go:117] "RemoveContainer" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" Jan 20 15:45:05 crc kubenswrapper[4949]: E0120 15:45:05.499995 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6\": container with ID starting with 67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6 not found: ID does not exist" containerID="67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500039 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6"} err="failed to get container status 
\"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6\": rpc error: code = NotFound desc = could not find container \"67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6\": container with ID starting with 67e263fe50b4e2d78a403e9f797711a294e9165bb9813a1542cf0817966250a6 not found: ID does not exist" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500068 4949 scope.go:117] "RemoveContainer" containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" Jan 20 15:45:05 crc kubenswrapper[4949]: E0120 15:45:05.500373 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3\": container with ID starting with abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3 not found: ID does not exist" containerID="abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500404 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3"} err="failed to get container status \"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3\": rpc error: code = NotFound desc = could not find container \"abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3\": container with ID starting with abf1e288a31155ac378625811fc2a84cd7a91e23d89ab047aedfe51d900342f3 not found: ID does not exist" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.500454 4949 scope.go:117] "RemoveContainer" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" Jan 20 15:45:05 crc kubenswrapper[4949]: E0120 15:45:05.501089 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73\": container with ID starting with b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73 not found: ID does not exist" containerID="b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73" Jan 20 15:45:05 crc kubenswrapper[4949]: I0120 15:45:05.501141 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73"} err="failed to get container status \"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73\": rpc error: code = NotFound desc = could not find container \"b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73\": container with ID starting with b9754c7b527049dfdd5064554fe6190a558da16a41a802e2b36666ba9d8c3f73 not found: ID does not exist" Jan 20 15:45:06 crc kubenswrapper[4949]: I0120 15:45:06.805826 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a858c71e-19cb-4464-91f5-366a6695586c" path="/var/lib/kubelet/pods/a858c71e-19cb-4464-91f5-366a6695586c/volumes" Jan 20 15:45:18 crc kubenswrapper[4949]: I0120 15:45:18.883842 4949 scope.go:117] "RemoveContainer" containerID="4ff5f836d3d163418d95ceb0986956f845ac79923a1ad3950a5ae54e3538d3fc" Jan 20 15:45:57 crc kubenswrapper[4949]: I0120 15:45:57.152375 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 20 15:45:57 crc kubenswrapper[4949]: I0120 15:45:57.153334 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.655771 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656726 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="extract-content" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656740 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="extract-content" Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656759 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656767 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656783 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="extract-utilities" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656788 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="extract-utilities" Jan 20 15:46:04 crc kubenswrapper[4949]: E0120 15:46:04.656798 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerName="collect-profiles" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656804 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerName="collect-profiles" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656957 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0ce17c-1d57-4af7-b417-1ab6838117c8" containerName="collect-profiles" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.656969 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a858c71e-19cb-4464-91f5-366a6695586c" containerName="registry-server" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.658604 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.668556 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.690688 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.690726 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.690773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793002 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793342 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793395 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.793479 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.794074 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.815153 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"community-operators-tjvnw\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:04 crc kubenswrapper[4949]: I0120 15:46:04.993452 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:05 crc kubenswrapper[4949]: I0120 15:46:05.516154 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:05 crc kubenswrapper[4949]: W0120 15:46:05.519843 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a79a8b8_cc72_4615_afc6_1710a61d29e6.slice/crio-4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c WatchSource:0}: Error finding container 4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c: Status 404 returned error can't find the container with id 4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c Jan 20 15:46:06 crc kubenswrapper[4949]: I0120 15:46:06.099947 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" exitCode=0 Jan 20 15:46:06 crc kubenswrapper[4949]: I0120 15:46:06.100106 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e"} Jan 20 15:46:06 crc kubenswrapper[4949]: I0120 15:46:06.100425 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerStarted","Data":"4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c"} Jan 20 15:46:07 crc kubenswrapper[4949]: I0120 15:46:07.109909 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerStarted","Data":"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3"} Jan 20 15:46:08 crc kubenswrapper[4949]: I0120 15:46:08.137842 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" exitCode=0 Jan 20 15:46:08 crc kubenswrapper[4949]: I0120 15:46:08.137923 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3"} Jan 20 15:46:09 crc kubenswrapper[4949]: I0120 15:46:09.149918 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerStarted","Data":"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c"} Jan 20 15:46:09 crc kubenswrapper[4949]: I0120 15:46:09.182918 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tjvnw" 
podStartSLOduration=2.546728817 podStartE2EDuration="5.18290262s" podCreationTimestamp="2026-01-20 15:46:04 +0000 UTC" firstStartedPulling="2026-01-20 15:46:06.104144948 +0000 UTC m=+3361.913975816" lastFinishedPulling="2026-01-20 15:46:08.740318721 +0000 UTC m=+3364.550149619" observedRunningTime="2026-01-20 15:46:09.177059366 +0000 UTC m=+3364.986890254" watchObservedRunningTime="2026-01-20 15:46:09.18290262 +0000 UTC m=+3364.992733478" Jan 20 15:46:14 crc kubenswrapper[4949]: I0120 15:46:14.993607 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:14 crc kubenswrapper[4949]: I0120 15:46:14.994152 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:15 crc kubenswrapper[4949]: I0120 15:46:15.047913 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:15 crc kubenswrapper[4949]: I0120 15:46:15.249803 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:15 crc kubenswrapper[4949]: I0120 15:46:15.294551 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.226773 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tjvnw" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" containerID="cri-o://8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" gracePeriod=2 Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.713617 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.760893 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") pod \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.761026 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") pod \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.761081 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") pod \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\" (UID: \"3a79a8b8-cc72-4615-afc6-1710a61d29e6\") " Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.761639 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities" (OuterVolumeSpecName: "utilities") pod "3a79a8b8-cc72-4615-afc6-1710a61d29e6" (UID: "3a79a8b8-cc72-4615-afc6-1710a61d29e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.770814 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd" (OuterVolumeSpecName: "kube-api-access-fvxrd") pod "3a79a8b8-cc72-4615-afc6-1710a61d29e6" (UID: "3a79a8b8-cc72-4615-afc6-1710a61d29e6"). InnerVolumeSpecName "kube-api-access-fvxrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.812246 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a79a8b8-cc72-4615-afc6-1710a61d29e6" (UID: "3a79a8b8-cc72-4615-afc6-1710a61d29e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.864045 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.864080 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxrd\" (UniqueName: \"kubernetes.io/projected/3a79a8b8-cc72-4615-afc6-1710a61d29e6-kube-api-access-fvxrd\") on node \"crc\" DevicePath \"\"" Jan 20 15:46:17 crc kubenswrapper[4949]: I0120 15:46:17.864091 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a79a8b8-cc72-4615-afc6-1710a61d29e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.238979 4949 generic.go:334] "Generic (PLEG): container finished" podID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" exitCode=0 Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239036 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c"} Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239381 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjvnw" event={"ID":"3a79a8b8-cc72-4615-afc6-1710a61d29e6","Type":"ContainerDied","Data":"4ab22d5212332cf448f90632e15e0255416c335de4c424d7162c30f1ef47d31c"} Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239414 4949 scope.go:117] "RemoveContainer" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.239045 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tjvnw" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.263309 4949 scope.go:117] "RemoveContainer" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.296983 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.298104 4949 scope.go:117] "RemoveContainer" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.313854 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tjvnw"] Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358119 4949 scope.go:117] "RemoveContainer" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" Jan 20 15:46:18 crc kubenswrapper[4949]: E0120 15:46:18.358480 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c\": container with ID starting with 8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c not found: ID does not exist" containerID="8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358534 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c"} err="failed to get container status \"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c\": rpc error: code = NotFound desc = could not find container \"8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c\": container with ID starting with 8e7723791ed83ee5e3f08cc75b123b7f48c7a40d8a71c4b6c18364b924c34c9c not found: ID does not exist" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358562 4949 scope.go:117] "RemoveContainer" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" Jan 20 15:46:18 crc kubenswrapper[4949]: E0120 15:46:18.358828 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3\": container with ID starting with 45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3 not found: ID does not exist" containerID="45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358857 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3"} err="failed to get container status \"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3\": rpc error: code = NotFound desc = could not find container \"45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3\": container with ID starting with 45a457386f040c73dfda373ac65461013caa5b9f29222d62c6b9ccf6502dabc3 not found: ID does not exist" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.358876 4949 scope.go:117] "RemoveContainer" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" Jan 20 15:46:18 crc kubenswrapper[4949]: E0120 15:46:18.359287 4949 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e\": container with ID starting with e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e not found: ID does not exist" containerID="e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.359318 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e"} err="failed to get container status \"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e\": rpc error: code = NotFound desc = could not find container \"e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e\": container with ID starting with e6ce9eec0ee746969c7f97486987eec6ea5930617bac70048b34448457bd907e not found: ID does not exist" Jan 20 15:46:18 crc kubenswrapper[4949]: I0120 15:46:18.804996 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" path="/var/lib/kubelet/pods/3a79a8b8-cc72-4615-afc6-1710a61d29e6/volumes" Jan 20 15:46:27 crc kubenswrapper[4949]: I0120 15:46:27.151895 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:46:27 crc kubenswrapper[4949]: I0120 15:46:27.152375 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.152888 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.153697 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.153770 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.155115 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.155225 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" 
podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1" gracePeriod=600 Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707689 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1" exitCode=0 Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707806 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1"} Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707941 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"} Jan 20 15:46:57 crc kubenswrapper[4949]: I0120 15:46:57.707965 4949 scope.go:117] "RemoveContainer" containerID="16f6e513564e4725d532729ef2b7202b595660a06b20f60c51788bc2e89d5e17" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.749828 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"] Jan 20 15:47:11 crc kubenswrapper[4949]: E0120 15:47:11.750821 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-utilities" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.750837 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-utilities" Jan 20 15:47:11 crc kubenswrapper[4949]: E0120 15:47:11.750858 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.750866 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" Jan 20 15:47:11 crc kubenswrapper[4949]: E0120 15:47:11.750878 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-content" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.750886 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="extract-content" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.751126 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a79a8b8-cc72-4615-afc6-1710a61d29e6" containerName="registry-server" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.752817 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.766047 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kk8nn"/"default-dockercfg-ldpvs" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.766082 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kk8nn"/"openshift-service-ca.crt" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.766178 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kk8nn"/"kube-root-ca.crt" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.770187 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"] Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.937873 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:11 crc kubenswrapper[4949]: I0120 15:47:11.938281 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.039991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.040123 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.040812 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.057922 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"must-gather-ccspq\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") " pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.087572 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq" Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.636906 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"] Jan 20 15:47:12 crc kubenswrapper[4949]: I0120 15:47:12.868915 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerStarted","Data":"8d775bd2daca3a9966099b7b0b03300db83c7b661bbfa1ed54825c53ac39aac9"} Jan 20 15:47:20 crc kubenswrapper[4949]: I0120 15:47:20.987071 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerStarted","Data":"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"} Jan 20 15:47:20 crc kubenswrapper[4949]: I0120 15:47:20.987756 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerStarted","Data":"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"} Jan 20 15:47:21 crc kubenswrapper[4949]: I0120 15:47:21.022494 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kk8nn/must-gather-ccspq" podStartSLOduration=2.782368158 podStartE2EDuration="10.022463041s" podCreationTimestamp="2026-01-20 15:47:11 +0000 UTC" firstStartedPulling="2026-01-20 15:47:12.641860598 +0000 UTC m=+3428.451691456" lastFinishedPulling="2026-01-20 15:47:19.881955481 +0000 UTC m=+3435.691786339" observedRunningTime="2026-01-20 15:47:21.010617997 +0000 UTC m=+3436.820448915" watchObservedRunningTime="2026-01-20 15:47:21.022463041 +0000 UTC m=+3436.832293939" Jan 20 15:47:23 crc kubenswrapper[4949]: E0120 15:47:23.357826 4949 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.41:52012->38.102.83.41:36705: write tcp 38.102.83.41:52012->38.102.83.41:36705: write: broken pipe Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.924612 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-j9l58"] Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.926004 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.962817 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:23 crc kubenswrapper[4949]: I0120 15:47:23.963006 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.064835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.064986 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.064991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.088467 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"crc-debug-j9l58\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:24 crc kubenswrapper[4949]: I0120 15:47:24.243176 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:25 crc kubenswrapper[4949]: I0120 15:47:25.029462 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" event={"ID":"a01ac885-313b-4cac-ad73-abd4dd2c9f97","Type":"ContainerStarted","Data":"6d89f9e66aa529d3f5a75012a12ed5c4b38240802264c61527ea25cdb6ad0dad"} Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.721333 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-ffcb5df54-fhbnh_25689957-1a77-40ab-8a4c-1e40a1524bac/barbican-api-log/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.731767 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-ffcb5df54-fhbnh_25689957-1a77-40ab-8a4c-1e40a1524bac/barbican-api/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.770620 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bfc57b96-w7nhj_02b718a3-85a6-4bb6-9e17-9ff6936cb5c4/barbican-keystone-listener-log/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.778071 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bfc57b96-w7nhj_02b718a3-85a6-4bb6-9e17-9ff6936cb5c4/barbican-keystone-listener/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.822908 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84d486fc9-sgwzr_0f7e061d-75da-4fc4-80c8-1163e314ebb5/barbican-worker-log/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.837731 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-84d486fc9-sgwzr_0f7e061d-75da-4fc4-80c8-1163e314ebb5/barbican-worker/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.892970 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2pnnh_da7cee45-2ef4-4ebc-8067-08dbe10af76a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.917684 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/ceilometer-central-agent/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.938662 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/ceilometer-notification-agent/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.943841 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/sg-core/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.953060 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b3ddebe6-ef20-4de2-9eaa-690312bbbf0a/proxy-httpd/0.log" Jan 20 15:47:26 crc kubenswrapper[4949]: I0120 15:47:26.969708 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-tp625_70d9d029-15fb-479a-b668-926d3167b179/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.001633 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hzspv_9f5697b2-a2f0-4b5c-949a-0f52e9e39beb/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: 
I0120 15:47:27.015635 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_605e8425-f80d-4cd4-981d-afb431ec676f/cinder-api-log/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.056205 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_605e8425-f80d-4cd4-981d-afb431ec676f/cinder-api/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.272859 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f7354f89-1113-43f0-b654-a4222ee05faf/cinder-backup/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.287923 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f7354f89-1113-43f0-b654-a4222ee05faf/probe/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.316670 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef233e09-2d4d-4f12-9adf-e1bab1dcd101/cinder-scheduler/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.338270 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ef233e09-2d4d-4f12-9adf-e1bab1dcd101/probe/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.398123 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_83382677-6882-49eb-a111-498346e2d6dc/cinder-volume/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.416511 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_83382677-6882-49eb-a111-498346e2d6dc/probe/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.430270 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-x77fv_6951e28c-3b02-44dd-9823-d0e4d1a779d5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.450822 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pscmc_aa357e67-831a-4584-bf56-0c2e58d1aed8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.466875 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hf624_d723357a-5423-49c3-9263-ff768f28745f/dnsmasq-dns/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.476128 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-hf624_d723357a-5423-49c3-9263-ff768f28745f/init/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.493966 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2e7cb37b-debf-462c-8a81-81ce79da0ee9/glance-log/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.506592 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2e7cb37b-debf-462c-8a81-81ce79da0ee9/glance-httpd/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.515659 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_80681a49-f9f1-4208-a90e-77c74cc6860d/glance-log/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.531496 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_80681a49-f9f1-4208-a90e-77c74cc6860d/glance-httpd/0.log" Jan 
20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.892460 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66d45cfc44-ltr94_08182d24-cea6-4daa-9dbb-efcb48b76434/horizon-log/0.log" Jan 20 15:47:27 crc kubenswrapper[4949]: I0120 15:47:27.992324 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66d45cfc44-ltr94_08182d24-cea6-4daa-9dbb-efcb48b76434/horizon/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.014943 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2bcrb_d1ff69ad-f42e-4882-a580-c2fc212ab3a4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.043714 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fqp9f_a8ca811b-8738-49ed-b552-bdf38a5d5650/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.137420 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b69c674cf-wdfrq_7dd53c2b-505a-4783-9e2a-34857e6158ea/keystone-api/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.151390 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4a8d0e18-297d-407d-8c7c-64555052b960/kube-state-metrics/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.195351 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-9xbns_ccd4282a-7ba2-4eda-9078-00d3f0ff58c4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.203281 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-176a-account-create-update-gqg2s_92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d/mariadb-account-create-update/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.218026 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9d247f3c-18c5-4045-a6a5-e25dc78c33ee/manila-api-log/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.335244 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9d247f3c-18c5-4045-a6a5-e25dc78c33ee/manila-api/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.344257 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-create-6d468_c1f501b4-e612-41a4-aef2-fdaf166aa018/mariadb-database-create/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.361212 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-sync-q7rxq_1501061b-c734-43b8-8f88-0d895789e209/manila-db-sync/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.454414 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_acbf90ca-14f6-4274-b63b-f4e71c1ce845/manila-scheduler/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.462027 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_acbf90ca-14f6-4274-b63b-f4e71c1ce845/probe/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.507018 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3/manila-share/0.log" Jan 20 15:47:28 crc kubenswrapper[4949]: I0120 15:47:28.512141 4949 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_84f5390b-e0ae-4386-a5f6-65a2d2a0d1b3/probe/0.log" Jan 20 15:47:38 crc kubenswrapper[4949]: I0120 15:47:38.149065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" event={"ID":"a01ac885-313b-4cac-ad73-abd4dd2c9f97","Type":"ContainerStarted","Data":"d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4"} Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.855442 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_485725f6-91f1-413b-89f5-21bde785bd94/memcached/0.log" Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.896685 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b8cd78967-6cmpj_dae84f47-70ef-4a10-ae62-dae601b0de81/neutron-api/0.log" Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.916232 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b8cd78967-6cmpj_dae84f47-70ef-4a10-ae62-dae601b0de81/neutron-httpd/0.log" Jan 20 15:47:39 crc kubenswrapper[4949]: I0120 15:47:39.944270 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4wnf_a6c12b14-7d12-46ea-be9c-15789d700112/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.108875 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0174a61d-76ab-4198-91f1-d97291db561b/nova-api-log/0.log" Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.380791 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0174a61d-76ab-4198-91f1-d97291db561b/nova-api-api/0.log" Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.469308 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_432760ec-2ef6-4335-a7ba-21a2d73ede73/nova-cell0-conductor-conductor/0.log" Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.574291 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e19f25ae-0920-4573-9f2e-6447ca83e76c/nova-cell1-conductor-conductor/0.log" Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.663387 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_16e90cac-28e0-4d75-a613-d77c9263f634/nova-cell1-novncproxy-novncproxy/0.log" Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.715134 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-z9vff_97b58b41-5a8f-47f7-af93-382d7a6f0e69/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:40 crc kubenswrapper[4949]: I0120 15:47:40.778475 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4185f7d0-b70a-4d49-82b9-e249bd1b2c48/nova-metadata-log/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.572307 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4185f7d0-b70a-4d49-82b9-e249bd1b2c48/nova-metadata-metadata/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.664996 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_51e2ed93-379c-457d-992a-57160c6be51a/nova-scheduler-scheduler/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.697889 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_f03e93a7-24b6-499c-89bc-1bf3e67221a6/galera/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.708860 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f03e93a7-24b6-499c-89bc-1bf3e67221a6/mysql-bootstrap/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.739530 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee020527-9591-42dc-b000-3153caede9cf/galera/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.749813 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee020527-9591-42dc-b000-3153caede9cf/mysql-bootstrap/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.758258 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0b4f97ab-7425-4271-bd09-0e89073ebdc1/openstackclient/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.770381 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q26vt_f4968375-00d3-4db1-93b4-db0808c464b2/openstack-network-exporter/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.783903 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nqhh2_c4179fca-4378-4347-a519-96120d9ae1cc/ovn-controller/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.799572 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kbnxn_bce99786-819a-47cc-8ad7-0c5581f034fa/ovsdb-server/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.808887 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kbnxn_bce99786-819a-47cc-8ad7-0c5581f034fa/ovs-vswitchd/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.816445 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kbnxn_bce99786-819a-47cc-8ad7-0c5581f034fa/ovsdb-server-init/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.847256 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7j58g_eb1d8e10-2c84-4a8f-a3d0-653432297fb1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.857811 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_425d9be8-fa72-4cbe-bcc7-444e46e67a08/ovn-northd/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.865054 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_425d9be8-fa72-4cbe-bcc7-444e46e67a08/openstack-network-exporter/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.881702 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab38c923-ec3b-400d-864a-c5e8a0d53999/ovsdbserver-nb/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.888318 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab38c923-ec3b-400d-864a-c5e8a0d53999/openstack-network-exporter/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.910463 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_17c9cb64-1ff5-4087-b424-1c2bb7398ba0/ovsdbserver-sb/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.917297 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_17c9cb64-1ff5-4087-b424-1c2bb7398ba0/openstack-network-exporter/0.log" Jan 20 15:47:41 crc kubenswrapper[4949]: I0120 15:47:41.975546 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-754d6d4c8d-v7txj_69138579-1fa8-4d89-b94f-46e3424d604c/placement-log/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.010377 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-754d6d4c8d-v7txj_69138579-1fa8-4d89-b94f-46e3424d604c/placement-api/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.030992 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_81813586-eebe-4c95-ad8b-433b8c501337/rabbitmq/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.037851 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_81813586-eebe-4c95-ad8b-433b8c501337/setup-container/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.106306 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_18d74874-b8f5-4706-abfe-c8d1cb7bb21b/rabbitmq/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.112618 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_18d74874-b8f5-4706-abfe-c8d1cb7bb21b/setup-container/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.131157 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-hkzh5_3b31ae29-db74-4104-b8b5-377bfa3f766a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.142506 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-ptpnk_f5d6330b-b87a-476b-bebc-a790026e5dd3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.153781 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bdp7d_4d06892f-967c-4bd9-ac54-c36c80e3df73/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.174498 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6z8gd_53b63ff2-c70c-4429-99c6-759d0eb33ae9/ssh-known-hosts-edpm-deployment/0.log" Jan 20 15:47:42 crc kubenswrapper[4949]: I0120 15:47:42.191491 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-j5ls7_cb58fe7e-6a7d-46ea-82ad-02e9200e8042/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 20 15:47:48 crc kubenswrapper[4949]: I0120 15:47:48.014903 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/controller/0.log" Jan 20 15:47:48 crc kubenswrapper[4949]: I0120 15:47:48.023880 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/kube-rbac-proxy/0.log" Jan 20 15:47:48 crc kubenswrapper[4949]: I0120 15:47:48.048479 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/controller/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.651449 4949 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.667847 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/reloader/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.674319 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr-metrics/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.683014 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.694284 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy-frr/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.702255 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-frr-files/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.712555 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-reloader/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.717560 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-metrics/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.729047 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-87tfc_9787b339-5a35-4568-8ea4-12b8904efd8a/frr-k8s-webhook-server/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.747365 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7949cdb884-qwqpl_aab28d03-013d-4f55-8f5d-4452aa51ae0b/manager/0.log" Jan 20 15:47:49 crc kubenswrapper[4949]: I0120 15:47:49.757597 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-598fc6787c-lklkm_418359eb-1dea-4f02-9964-9ab810e3bc09/webhook-server/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.037253 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/speaker/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.041914 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/kube-rbac-proxy/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.420924 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-jzl6b_070f7ba5-a528-4316-8484-4ea82fb70a40/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.472972 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-vll8p_c44d3483-738b-4aab-a4a2-1478480b6330/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.488865 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/extract/0.log" Jan 20 
15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.498211 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/util/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.506741 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/pull/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.524695 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-vhsdx_070a47eb-d68f-4208-86eb-a99f0a9ce5df/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.578364 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-m9grk_5eae4c51-3e86-4153-8c26-d4c51b2f1331/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.592416 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-jxnlk_e60d05a5-d1d5-4959-843b-654aaf547bca/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.622416 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5vwt4_05642ba7-89bd-4d72-a31b-4e6d4532923e/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.868759 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-q5h89_c07420af-b163-4ab6-8a1c-5e697629cab0/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.879137 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-bt9wn_57182814-f19c-4247-b774-5b01afe7d680/manager/0.log" Jan 20 15:47:50 crc kubenswrapper[4949]: I0120 15:47:50.969973 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-th6cb_d6706563-2c93-414e-bb49-cd74ae82d235/manager/0.log" Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.022690 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-ft9st_2dacfd0a-8e74-4eb1-b4cb-892ae16a9291/manager/0.log" Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.060906 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tj7jv_a87686a4-1af3-4d05-ac2d-15551c80e0d7/manager/0.log" Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.112489 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ljxrw_017942ba-9ec1-4474-91e5-7adb1481e807/manager/0.log" Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.192261 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-cc9zv_728be0e4-4dde-4f00-be4f-af6590d7025b/manager/0.log" Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.201065 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-g87xm_d02df557-c289-4444-b29b-917ea271a874/manager/0.log" Jan 20 15:47:51 crc 
kubenswrapper[4949]: I0120 15:47:51.219995 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg_0e576db6-d246-4a03-a2bd-8cbd7f7526fd/manager/0.log" Jan 20 15:47:51 crc kubenswrapper[4949]: I0120 15:47:51.402147 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-647bfc4c5c-8vnrj_fa13f464-1245-4c7e-ba74-47e65076c9d1/operator/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.616023 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-559d8b8b56-srtdv_ec1b1a5b-0d86-40b4-9410-397d183776d0/manager/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.624503 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nf5l6_a06c3c7b-913e-412e-833e-fcd7df154877/registry-server/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.681128 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-f52ph_ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e/manager/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.709870 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-4kwz9_58fdba15-e8ba-47fa-aca8-90f638577a6b/manager/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.731387 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pzpkv_d770793b-0e56-43cc-9707-5d062b8f7c82/operator/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.751597 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-nr2lr_db4c21b1-de25-4c17-a3c3-e6eea4044d77/manager/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.839721 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-94wzp_dc5c569e-c0ee-44bc-bdc9-397ab5941ad5/manager/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.846652 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-869947677f-8qg9p_63acb80f-21b4-4255-af60-03a68dd07658/manager/0.log" Jan 20 15:47:52 crc kubenswrapper[4949]: I0120 15:47:52.860363 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-jc5mh_68de7d27-2202-473a-b077-d03d033244a2/manager/0.log" Jan 20 15:47:55 crc kubenswrapper[4949]: I0120 15:47:55.330822 4949 generic.go:334] "Generic (PLEG): container finished" podID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerID="d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4" exitCode=0 Jan 20 15:47:55 crc kubenswrapper[4949]: I0120 15:47:55.330980 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" event={"ID":"a01ac885-313b-4cac-ad73-abd4dd2c9f97","Type":"ContainerDied","Data":"d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4"} Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.452083 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.494470 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-j9l58"] Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.504334 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-j9l58"] Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.605600 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") pod \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.605836 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") pod \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\" (UID: \"a01ac885-313b-4cac-ad73-abd4dd2c9f97\") " Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.606646 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host" (OuterVolumeSpecName: "host") pod "a01ac885-313b-4cac-ad73-abd4dd2c9f97" (UID: "a01ac885-313b-4cac-ad73-abd4dd2c9f97"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.626670 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr" (OuterVolumeSpecName: "kube-api-access-ldwkr") pod "a01ac885-313b-4cac-ad73-abd4dd2c9f97" (UID: "a01ac885-313b-4cac-ad73-abd4dd2c9f97"). InnerVolumeSpecName "kube-api-access-ldwkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.707772 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldwkr\" (UniqueName: \"kubernetes.io/projected/a01ac885-313b-4cac-ad73-abd4dd2c9f97-kube-api-access-ldwkr\") on node \"crc\" DevicePath \"\"" Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.707803 4949 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a01ac885-313b-4cac-ad73-abd4dd2c9f97-host\") on node \"crc\" DevicePath \"\"" Jan 20 15:47:56 crc kubenswrapper[4949]: I0120 15:47:56.800666 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" path="/var/lib/kubelet/pods/a01ac885-313b-4cac-ad73-abd4dd2c9f97/volumes" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.349798 4949 scope.go:117] "RemoveContainer" containerID="d816f326344f36733c35a25a41d16a2ad87b81846acd1c874e5ed61d01b8a3a4" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.350262 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-j9l58" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.709380 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-7wft6"] Jan 20 15:47:57 crc kubenswrapper[4949]: E0120 15:47:57.710809 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerName="container-00" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.710832 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerName="container-00" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.711021 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01ac885-313b-4cac-ad73-abd4dd2c9f97" containerName="container-00" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.711777 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.847153 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.847272 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.948952 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.949136 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.949450 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:57 crc kubenswrapper[4949]: I0120 15:47:57.980291 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"crc-debug-7wft6\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.030284 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:58 crc kubenswrapper[4949]: W0120 15:47:58.061919 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b8dab70_1888_4e69_a77f_47d2287883e9.slice/crio-ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174 WatchSource:0}: Error finding container ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174: Status 404 returned error can't find the container with id ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174 Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.215715 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d5t2m_95c38c39-62f0-4343-9628-5070d8cc10b7/control-plane-machine-set-operator/0.log" Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.230914 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/kube-rbac-proxy/0.log" Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.246764 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/machine-api-operator/0.log" Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.360060 4949 generic.go:334] "Generic (PLEG): container finished" podID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerID="154f9aff305cda87800e628ef62fda33f3caae55a607a147ebe105cec49f54f4" exitCode=1 Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.360122 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" event={"ID":"9b8dab70-1888-4e69-a77f-47d2287883e9","Type":"ContainerDied","Data":"154f9aff305cda87800e628ef62fda33f3caae55a607a147ebe105cec49f54f4"} Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.360312 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" event={"ID":"9b8dab70-1888-4e69-a77f-47d2287883e9","Type":"ContainerStarted","Data":"ecbcfcda465c52f8618d2513a288765175179d4189ff5ff54f6c029eca603174"} Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.395399 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-7wft6"] Jan 20 15:47:58 crc kubenswrapper[4949]: I0120 15:47:58.404026 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kk8nn/crc-debug-7wft6"] Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.476644 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.577239 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") pod \"9b8dab70-1888-4e69-a77f-47d2287883e9\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.577385 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host" (OuterVolumeSpecName: "host") pod "9b8dab70-1888-4e69-a77f-47d2287883e9" (UID: "9b8dab70-1888-4e69-a77f-47d2287883e9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.577913 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") pod \"9b8dab70-1888-4e69-a77f-47d2287883e9\" (UID: \"9b8dab70-1888-4e69-a77f-47d2287883e9\") " Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.579089 4949 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b8dab70-1888-4e69-a77f-47d2287883e9-host\") on node \"crc\" DevicePath \"\"" Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.592725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22" (OuterVolumeSpecName: "kube-api-access-wcb22") pod "9b8dab70-1888-4e69-a77f-47d2287883e9" (UID: "9b8dab70-1888-4e69-a77f-47d2287883e9"). InnerVolumeSpecName "kube-api-access-wcb22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:47:59 crc kubenswrapper[4949]: I0120 15:47:59.680553 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcb22\" (UniqueName: \"kubernetes.io/projected/9b8dab70-1888-4e69-a77f-47d2287883e9-kube-api-access-wcb22\") on node \"crc\" DevicePath \"\"" Jan 20 15:48:00 crc kubenswrapper[4949]: I0120 15:48:00.385153 4949 scope.go:117] "RemoveContainer" containerID="154f9aff305cda87800e628ef62fda33f3caae55a607a147ebe105cec49f54f4" Jan 20 15:48:00 crc kubenswrapper[4949]: I0120 15:48:00.385217 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/crc-debug-7wft6" Jan 20 15:48:00 crc kubenswrapper[4949]: I0120 15:48:00.800928 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" path="/var/lib/kubelet/pods/9b8dab70-1888-4e69-a77f-47d2287883e9/volumes" Jan 20 15:48:53 crc kubenswrapper[4949]: I0120 15:48:53.225257 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-k9xq5_1ca44809-a121-411d-8be6-f1a8b879b97f/cert-manager-controller/0.log" Jan 20 15:48:53 crc kubenswrapper[4949]: I0120 15:48:53.238811 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9x9js_9cd775b9-2d07-40bb-964c-6e935aa6775a/cert-manager-cainjector/0.log" Jan 20 15:48:53 crc kubenswrapper[4949]: I0120 15:48:53.250686 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wdg2b_512fc928-abb3-4353-9543-be5d35cd8ccd/cert-manager-webhook/0.log" Jan 20 15:48:57 crc kubenswrapper[4949]: I0120 15:48:57.152639 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:48:57 crc kubenswrapper[4949]: I0120 15:48:57.153257 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:48:59 crc 
kubenswrapper[4949]: I0120 15:48:59.009879 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-vt2ng_7a366383-883e-4f7e-b656-d23eb0fe6294/nmstate-console-plugin/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.030983 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ndwpd_248f6a09-0064-4d9f-a4d7-13a92b06ee72/nmstate-handler/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.042035 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/nmstate-metrics/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.049254 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/kube-rbac-proxy/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.064910 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jsrwb_b2bfb1bf-1717-4d51-9632-204856f869f4/nmstate-operator/0.log" Jan 20 15:48:59 crc kubenswrapper[4949]: I0120 15:48:59.074009 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-twsz5_71837cd3-c24a-4d86-b59f-28330f7d2809/nmstate-webhook/0.log" Jan 20 15:49:11 crc kubenswrapper[4949]: I0120 15:49:11.177506 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/controller/0.log" Jan 20 15:49:11 crc kubenswrapper[4949]: I0120 15:49:11.185684 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/kube-rbac-proxy/0.log" Jan 20 15:49:11 crc kubenswrapper[4949]: I0120 15:49:11.210859 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/controller/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.434953 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.446000 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/reloader/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.450986 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr-metrics/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.464736 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.476231 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy-frr/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.486695 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-frr-files/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.501187 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-reloader/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.511458 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-metrics/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.525085 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-87tfc_9787b339-5a35-4568-8ea4-12b8904efd8a/frr-k8s-webhook-server/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.550962 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7949cdb884-qwqpl_aab28d03-013d-4f55-8f5d-4452aa51ae0b/manager/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.561666 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-598fc6787c-lklkm_418359eb-1dea-4f02-9964-9ab810e3bc09/webhook-server/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.898876 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/speaker/0.log" Jan 20 15:49:12 crc kubenswrapper[4949]: I0120 15:49:12.909533 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/kube-rbac-proxy/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.437229 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr_21202f95-d312-47b4-988f-4cd0a9dac08e/extract/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.445860 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr_21202f95-d312-47b4-988f-4cd0a9dac08e/util/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.455203 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9rdmr_21202f95-d312-47b4-988f-4cd0a9dac08e/pull/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.483330 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk_3f63e0ce-f0ce-434d-b9f5-b0695dba0b06/extract/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.498266 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk_3f63e0ce-f0ce-434d-b9f5-b0695dba0b06/util/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.507881 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713srrwk_3f63e0ce-f0ce-434d-b9f5-b0695dba0b06/pull/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.897381 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kmnv_a55010bf-14fe-4c92-8fe4-d2864bf74ad1/registry-server/0.log" Jan 20 15:49:17 crc kubenswrapper[4949]: I0120 15:49:17.902669 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kmnv_a55010bf-14fe-4c92-8fe4-d2864bf74ad1/extract-utilities/0.log" Jan 20 15:49:17 
crc kubenswrapper[4949]: I0120 15:49:17.909949 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kmnv_a55010bf-14fe-4c92-8fe4-d2864bf74ad1/extract-content/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.352445 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xr695_090c2072-966d-4848-82fc-c9aecee3d6c8/registry-server/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.368821 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xr695_090c2072-966d-4848-82fc-c9aecee3d6c8/extract-utilities/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.381778 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xr695_090c2072-966d-4848-82fc-c9aecee3d6c8/extract-content/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.411684 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cnrps_e8c7be5c-d5c3-44d1-91d3-c2bedf55eb74/marketplace-operator/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.549729 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-97g5q_3f68902a-0bee-45a6-96c4-b4a80feaba0b/registry-server/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.554959 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-97g5q_3f68902a-0bee-45a6-96c4-b4a80feaba0b/extract-utilities/0.log" Jan 20 15:49:18 crc kubenswrapper[4949]: I0120 15:49:18.563272 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-97g5q_3f68902a-0bee-45a6-96c4-b4a80feaba0b/extract-content/0.log" Jan 20 15:49:19 crc kubenswrapper[4949]: I0120 15:49:19.016842 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cmxfz_983905b2-cefb-487e-887f-630d669af9ec/registry-server/0.log" Jan 20 15:49:19 crc kubenswrapper[4949]: I0120 15:49:19.022128 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cmxfz_983905b2-cefb-487e-887f-630d669af9ec/extract-utilities/0.log" Jan 20 15:49:19 crc kubenswrapper[4949]: I0120 15:49:19.031482 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cmxfz_983905b2-cefb-487e-887f-630d669af9ec/extract-content/0.log" Jan 20 15:49:27 crc kubenswrapper[4949]: I0120 15:49:27.152383 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:49:27 crc kubenswrapper[4949]: I0120 15:49:27.153003 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.176499 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:34 crc kubenswrapper[4949]: E0120 
15:49:34.177466 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerName="container-00" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.177478 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerName="container-00" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.177676 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8dab70-1888-4e69-a77f-47d2287883e9" containerName="container-00" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.182969 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.206349 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.338580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.338822 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.338872 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.440957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.441271 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.441431 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.441606 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.442059 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.466550 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"certified-operators-t5rhv\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:34 crc kubenswrapper[4949]: I0120 15:49:34.510062 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:35 crc kubenswrapper[4949]: I0120 15:49:35.048353 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:35 crc kubenswrapper[4949]: I0120 15:49:35.270505 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerStarted","Data":"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"} Jan 20 15:49:35 crc kubenswrapper[4949]: I0120 15:49:35.270607 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerStarted","Data":"ebb54275831517bf6b265a69209886896dfc2a12fa90ea84e0ba4368c53095d4"} Jan 20 15:49:36 crc kubenswrapper[4949]: I0120 15:49:36.285058 4949 generic.go:334] "Generic (PLEG): container finished" podID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2" exitCode=0 Jan 20 15:49:36 crc kubenswrapper[4949]: I0120 15:49:36.285245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"} Jan 20 15:49:38 crc kubenswrapper[4949]: I0120 15:49:38.303847 4949 generic.go:334] "Generic (PLEG): container finished" podID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519" exitCode=0 Jan 20 15:49:38 crc kubenswrapper[4949]: I0120 15:49:38.304393 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519"} Jan 20 15:49:39 crc kubenswrapper[4949]: I0120 15:49:39.316407 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerStarted","Data":"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314"} Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.510724 
4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.511588 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.570289 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:44 crc kubenswrapper[4949]: I0120 15:49:44.590271 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t5rhv" podStartSLOduration=8.149500967 podStartE2EDuration="10.590251412s" podCreationTimestamp="2026-01-20 15:49:34 +0000 UTC" firstStartedPulling="2026-01-20 15:49:36.287462668 +0000 UTC m=+3572.097293526" lastFinishedPulling="2026-01-20 15:49:38.728213113 +0000 UTC m=+3574.538043971" observedRunningTime="2026-01-20 15:49:39.331055364 +0000 UTC m=+3575.140886222" watchObservedRunningTime="2026-01-20 15:49:44.590251412 +0000 UTC m=+3580.400082260" Jan 20 15:49:45 crc kubenswrapper[4949]: I0120 15:49:45.410651 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:45 crc kubenswrapper[4949]: I0120 15:49:45.465948 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.377058 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t5rhv" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server" containerID="cri-o://5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" gracePeriod=2 Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.881431 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.922281 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") pod \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.922655 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") pod \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.922733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") pod \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\" (UID: \"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52\") " Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.923947 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities" (OuterVolumeSpecName: "utilities") pod "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" (UID: "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:49:47 crc kubenswrapper[4949]: I0120 15:49:47.933729 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp" (OuterVolumeSpecName: "kube-api-access-bq5fp") pod "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" (UID: "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52"). InnerVolumeSpecName "kube-api-access-bq5fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.026463 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.026535 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq5fp\" (UniqueName: \"kubernetes.io/projected/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-kube-api-access-bq5fp\") on node \"crc\" DevicePath \"\"" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.184661 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" (UID: "9e3e5979-3c4d-439b-b6eb-b292ab9e3e52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.231981 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.387972 4949 generic.go:334] "Generic (PLEG): container finished" podID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" exitCode=0 Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388016 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t5rhv" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388015 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314"} Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388161 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5rhv" event={"ID":"9e3e5979-3c4d-439b-b6eb-b292ab9e3e52","Type":"ContainerDied","Data":"ebb54275831517bf6b265a69209886896dfc2a12fa90ea84e0ba4368c53095d4"} Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.388188 4949 scope.go:117] "RemoveContainer" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.409362 4949 scope.go:117] "RemoveContainer" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.425718 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.439453 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t5rhv"] Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.441682 4949 scope.go:117] "RemoveContainer" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.484039 4949 scope.go:117] "RemoveContainer" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" Jan 20 15:49:48 crc kubenswrapper[4949]: E0120 15:49:48.487097 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314\": container with ID starting with 5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314 not found: ID does not exist" containerID="5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487155 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314"} err="failed to get container status \"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314\": rpc error: code = NotFound desc = could not find container \"5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314\": container with ID starting with 5aff2836b98d9186ec23b57da36c3b000ab73815ab4810a7e059625a21969314 not found: ID does not exist" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487184 4949 scope.go:117] "RemoveContainer" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519" Jan 20 15:49:48 crc kubenswrapper[4949]: E0120 15:49:48.487422 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519\": container with ID starting with f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519 not found: ID does not exist" containerID="f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487442 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519"} err="failed to get container status \"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519\": rpc error: code = NotFound desc = could not find container \"f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519\": container with ID starting with f1399a1331289db36fb6512bbb7629a8c62aa5c725981d86f62322739ff33519 not found: ID does not exist" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487455 4949 scope.go:117] "RemoveContainer" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2" Jan 20 15:49:48 crc kubenswrapper[4949]: E0120 15:49:48.487649 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2\": container with ID starting with 2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2 not found: ID does not exist" containerID="2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.487690 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2"} err="failed to get container status \"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2\": rpc error: code = NotFound desc = could not find container \"2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2\": container with ID starting with 2e4a470116be913040c822ba0b900f8ed3e298b11d693d1120bfed38ac03b0f2 not found: ID does not exist" Jan 20 15:49:48 crc kubenswrapper[4949]: I0120 15:49:48.800274 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" path="/var/lib/kubelet/pods/9e3e5979-3c4d-439b-b6eb-b292ab9e3e52/volumes" Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.152678 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.153125 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.153177 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.154052 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.154097 4949 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" gracePeriod=600 Jan 20 15:49:57 crc kubenswrapper[4949]: E0120 15:49:57.286083 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460040 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" exitCode=0 Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460082 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"} Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460115 4949 scope.go:117] "RemoveContainer" containerID="1b579df906668286c8122c3e7598a0f47671f36cd4a0e2105880997ae62edad1" Jan 20 15:49:57 crc kubenswrapper[4949]: I0120 15:49:57.460769 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:49:57 crc kubenswrapper[4949]: E0120 15:49:57.460995 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:12 crc kubenswrapper[4949]: I0120 15:50:12.789576 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:50:12 crc kubenswrapper[4949]: E0120 15:50:12.790581 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:23 crc kubenswrapper[4949]: I0120 15:50:23.788847 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:50:23 crc kubenswrapper[4949]: E0120 15:50:23.789618 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:28 crc 
kubenswrapper[4949]: I0120 15:50:28.608231 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:28 crc kubenswrapper[4949]: E0120 15:50:28.609060 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-utilities" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609072 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-utilities" Jan 20 15:50:28 crc kubenswrapper[4949]: E0120 15:50:28.609095 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609104 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server" Jan 20 15:50:28 crc kubenswrapper[4949]: E0120 15:50:28.609121 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-content" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609127 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="extract-content" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.609304 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3e5979-3c4d-439b-b6eb-b292ab9e3e52" containerName="registry-server" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.610554 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.622469 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.776539 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.776595 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.776650 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.878518 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: 
I0120 15:50:28.878574 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.878609 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.879231 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.879317 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.898947 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"redhat-operators-5btxz\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:28 crc kubenswrapper[4949]: I0120 15:50:28.931260 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.426812 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.713798 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd358256-8547-497c-b550-c67a395e34a5" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" exitCode=0 Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.713863 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c"} Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.714066 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerStarted","Data":"9da8f2722c3a4ae2a71f48976d74b6f30505d129cc5c03322698b370190492b5"} Jan 20 15:50:29 crc kubenswrapper[4949]: I0120 15:50:29.716330 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 15:50:31 crc kubenswrapper[4949]: I0120 15:50:31.731419 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerStarted","Data":"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80"} Jan 20 15:50:32 crc kubenswrapper[4949]: I0120 15:50:32.740248 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd358256-8547-497c-b550-c67a395e34a5" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" exitCode=0 Jan 20 15:50:32 crc kubenswrapper[4949]: I0120 15:50:32.740291 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80"} Jan 20 15:50:33 crc kubenswrapper[4949]: I0120 15:50:33.761683 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerStarted","Data":"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79"} Jan 20 15:50:33 crc kubenswrapper[4949]: I0120 15:50:33.802810 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5btxz" podStartSLOduration=2.269922445 podStartE2EDuration="5.802793176s" podCreationTimestamp="2026-01-20 15:50:28 +0000 UTC" firstStartedPulling="2026-01-20 15:50:29.715980638 +0000 UTC m=+3625.525811496" lastFinishedPulling="2026-01-20 15:50:33.248851369 +0000 UTC m=+3629.058682227" observedRunningTime="2026-01-20 15:50:33.792642667 +0000 UTC m=+3629.602473515" watchObservedRunningTime="2026-01-20 15:50:33.802793176 +0000 UTC m=+3629.612624024" Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.496461 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/controller/0.log" Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.506651 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-n6txw_b76ab7ec-a978-4aea-bc88-b2a82bc54e14/kube-rbac-proxy/0.log" Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.524387 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/controller/0.log" Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.628421 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-k9xq5_1ca44809-a121-411d-8be6-f1a8b879b97f/cert-manager-controller/0.log" Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.646412 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9x9js_9cd775b9-2d07-40bb-964c-6e935aa6775a/cert-manager-cainjector/0.log" Jan 20 15:50:34 crc kubenswrapper[4949]: I0120 15:50:34.657427 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wdg2b_512fc928-abb3-4353-9543-be5d35cd8ccd/cert-manager-webhook/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.774944 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.786374 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/reloader/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.789442 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:50:35 crc kubenswrapper[4949]: E0120 15:50:35.790875 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.791720 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/frr-metrics/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.799692 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.808036 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/kube-rbac-proxy-frr/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.820672 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-frr-files/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.828904 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-reloader/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.836578 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hg78r_a4a159b5-92c1-4221-9b9d-ef46eda1afca/cp-metrics/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.845998 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-87tfc_9787b339-5a35-4568-8ea4-12b8904efd8a/frr-k8s-webhook-server/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.871809 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7949cdb884-qwqpl_aab28d03-013d-4f55-8f5d-4452aa51ae0b/manager/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.889475 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-598fc6787c-lklkm_418359eb-1dea-4f02-9964-9ab810e3bc09/webhook-server/0.log" Jan 20 15:50:35 crc kubenswrapper[4949]: I0120 15:50:35.985839 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-jzl6b_070f7ba5-a528-4316-8484-4ea82fb70a40/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.078922 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-vll8p_c44d3483-738b-4aab-a4a2-1478480b6330/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.088247 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/extract/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.096954 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/util/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.103582 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/pull/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.131301 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-vhsdx_070a47eb-d68f-4208-86eb-a99f0a9ce5df/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.280987 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-m9grk_5eae4c51-3e86-4153-8c26-d4c51b2f1331/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.297510 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-jxnlk_e60d05a5-d1d5-4959-843b-654aaf547bca/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.354437 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5vwt4_05642ba7-89bd-4d72-a31b-4e6d4532923e/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.382684 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/speaker/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.393693 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-znbk6_e00f603c-93d1-4941-908a-26fdf24da7b7/kube-rbac-proxy/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.604548 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-q5h89_c07420af-b163-4ab6-8a1c-5e697629cab0/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.620665 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-bt9wn_57182814-f19c-4247-b774-5b01afe7d680/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.686535 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-th6cb_d6706563-2c93-414e-bb49-cd74ae82d235/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.728152 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-ft9st_2dacfd0a-8e74-4eb1-b4cb-892ae16a9291/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.761599 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tj7jv_a87686a4-1af3-4d05-ac2d-15551c80e0d7/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.804750 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ljxrw_017942ba-9ec1-4474-91e5-7adb1481e807/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.872673 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-cc9zv_728be0e4-4dde-4f00-be4f-af6590d7025b/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.887742 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-g87xm_d02df557-c289-4444-b29b-917ea271a874/manager/0.log" Jan 20 15:50:36 crc kubenswrapper[4949]: I0120 15:50:36.902400 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg_0e576db6-d246-4a03-a2bd-8cbd7f7526fd/manager/0.log" Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.032595 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-647bfc4c5c-8vnrj_fa13f464-1245-4c7e-ba74-47e65076c9d1/operator/0.log" Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.045536 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.060000 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.078873 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-176a-account-create-update-gqg2s"] Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.100108 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-6d468"] Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.457800 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-k9xq5_1ca44809-a121-411d-8be6-f1a8b879b97f/cert-manager-controller/0.log" Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.511605 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9x9js_9cd775b9-2d07-40bb-964c-6e935aa6775a/cert-manager-cainjector/0.log" Jan 20 15:50:37 crc kubenswrapper[4949]: I0120 15:50:37.525175 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wdg2b_512fc928-abb3-4353-9543-be5d35cd8ccd/cert-manager-webhook/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.248298 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d5t2m_95c38c39-62f0-4343-9628-5070d8cc10b7/control-plane-machine-set-operator/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.260743 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/kube-rbac-proxy/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.269152 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tsmsl_7f69495e-a17d-4493-b598-99c2fc9afee7/machine-api-operator/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.393670 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-559d8b8b56-srtdv_ec1b1a5b-0d86-40b4-9410-397d183776d0/manager/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.406831 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nf5l6_a06c3c7b-913e-412e-833e-fcd7df154877/registry-server/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.454615 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-f52ph_ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e/manager/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.480271 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-4kwz9_58fdba15-e8ba-47fa-aca8-90f638577a6b/manager/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.498547 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pzpkv_d770793b-0e56-43cc-9707-5d062b8f7c82/operator/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.511164 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-nr2lr_db4c21b1-de25-4c17-a3c3-e6eea4044d77/manager/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.584721 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-94wzp_dc5c569e-c0ee-44bc-bdc9-397ab5941ad5/manager/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.596591 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-869947677f-8qg9p_63acb80f-21b4-4255-af60-03a68dd07658/manager/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.608180 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-jc5mh_68de7d27-2202-473a-b077-d03d033244a2/manager/0.log" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.798317 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d" path="/var/lib/kubelet/pods/92d9427f-dcc6-4ab3-bab9-d537ef1a1d4d/volumes" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.798910 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f501b4-e612-41a4-aef2-fdaf166aa018" path="/var/lib/kubelet/pods/c1f501b4-e612-41a4-aef2-fdaf166aa018/volumes" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.932218 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:38 crc kubenswrapper[4949]: I0120 15:50:38.932269 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.362295 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-jzl6b_070f7ba5-a528-4316-8484-4ea82fb70a40/manager/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.420623 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-vll8p_c44d3483-738b-4aab-a4a2-1478480b6330/manager/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.429064 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/extract/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.435363 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/util/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.443140 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dbc5247fe09b4063ba3f6918832310ddbf4fa607e6df715e75c5958af7ww2zt_beb9083f-e7f3-412d-9322-122ad5dcaaf6/pull/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.454285 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-vhsdx_070a47eb-d68f-4208-86eb-a99f0a9ce5df/manager/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.541852 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-m9grk_5eae4c51-3e86-4153-8c26-d4c51b2f1331/manager/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.558050 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-jxnlk_e60d05a5-d1d5-4959-843b-654aaf547bca/manager/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.588482 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5vwt4_05642ba7-89bd-4d72-a31b-4e6d4532923e/manager/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.931967 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-q5h89_c07420af-b163-4ab6-8a1c-5e697629cab0/manager/0.log" Jan 20 15:50:39 crc kubenswrapper[4949]: I0120 15:50:39.944338 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-bt9wn_57182814-f19c-4247-b774-5b01afe7d680/manager/0.log" Jan 20 15:50:39 crc 
kubenswrapper[4949]: I0120 15:50:39.983554 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5btxz" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server" probeResult="failure" output=< Jan 20 15:50:39 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Jan 20 15:50:39 crc kubenswrapper[4949]: > Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.012768 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-th6cb_d6706563-2c93-414e-bb49-cd74ae82d235/manager/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.055397 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-ft9st_2dacfd0a-8e74-4eb1-b4cb-892ae16a9291/manager/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.072241 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-vt2ng_7a366383-883e-4f7e-b656-d23eb0fe6294/nmstate-console-plugin/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.087327 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ndwpd_248f6a09-0064-4d9f-a4d7-13a92b06ee72/nmstate-handler/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.088856 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-tj7jv_a87686a4-1af3-4d05-ac2d-15551c80e0d7/manager/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.109640 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/nmstate-metrics/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.119423 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-bz62x_696be671-724b-4447-ba02-730dd10fc489/kube-rbac-proxy/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.130060 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ljxrw_017942ba-9ec1-4474-91e5-7adb1481e807/manager/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.132692 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jsrwb_b2bfb1bf-1717-4d51-9632-204856f869f4/nmstate-operator/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.148629 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-twsz5_71837cd3-c24a-4d86-b59f-28330f7d2809/nmstate-webhook/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.208581 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-cc9zv_728be0e4-4dde-4f00-be4f-af6590d7025b/manager/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.218525 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-g87xm_d02df557-c289-4444-b29b-917ea271a874/manager/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.233929 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qmsmg_0e576db6-d246-4a03-a2bd-8cbd7f7526fd/manager/0.log" Jan 20 15:50:40 crc kubenswrapper[4949]: I0120 15:50:40.359621 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-647bfc4c5c-8vnrj_fa13f464-1245-4c7e-ba74-47e65076c9d1/operator/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.623215 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-559d8b8b56-srtdv_ec1b1a5b-0d86-40b4-9410-397d183776d0/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.635146 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nf5l6_a06c3c7b-913e-412e-833e-fcd7df154877/registry-server/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.669404 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-f52ph_ecd3ebb3-c8d8-4f9b-8402-626bfb6a215e/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.713877 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-4kwz9_58fdba15-e8ba-47fa-aca8-90f638577a6b/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.737021 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-pzpkv_d770793b-0e56-43cc-9707-5d062b8f7c82/operator/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.747732 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-nr2lr_db4c21b1-de25-4c17-a3c3-e6eea4044d77/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.831119 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-94wzp_dc5c569e-c0ee-44bc-bdc9-397ab5941ad5/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.841616 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-869947677f-8qg9p_63acb80f-21b4-4255-af60-03a68dd07658/manager/0.log" Jan 20 15:50:41 crc kubenswrapper[4949]: I0120 15:50:41.852094 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-jc5mh_68de7d27-2202-473a-b077-d03d033244a2/manager/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.208317 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/2.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.301882 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2szcd_3ac16078-f295-4f4b-875c-a8505e87b9da/kube-multus/3.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.315366 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/kube-multus-additional-cni-plugins/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.325563 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/egress-router-binary-copy/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.333628 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/cni-plugins/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.340434 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/bond-cni-plugin/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.348946 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/routeoverride-cni/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.357675 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/whereabouts-cni-bincopy/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.365935 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-sqr5x_da08b8e6-19e1-41fa-8e71-2988f3effb27/whereabouts-cni/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.406751 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-zvfr4_c47ecb6d-9ecf-480f-b605-4dd91e900521/multus-admission-controller/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.411907 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-zvfr4_c47ecb6d-9ecf-480f-b605-4dd91e900521/kube-rbac-proxy/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.445544 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-hlfls_fa4eae9d-b492-4fd3-8baf-38ed726d9e4c/network-metrics-daemon/0.log" Jan 20 15:50:44 crc kubenswrapper[4949]: I0120 15:50:44.450853 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-hlfls_fa4eae9d-b492-4fd3-8baf-38ed726d9e4c/kube-rbac-proxy/0.log" Jan 20 15:50:46 crc kubenswrapper[4949]: I0120 15:50:46.793775 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:50:46 crc kubenswrapper[4949]: E0120 15:50:46.794589 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:49 crc kubenswrapper[4949]: I0120 15:50:48.999226 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:49 crc kubenswrapper[4949]: I0120 15:50:49.070257 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:49 crc kubenswrapper[4949]: I0120 15:50:49.247798 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 
20 15:50:50 crc kubenswrapper[4949]: I0120 15:50:50.918530 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5btxz" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server" containerID="cri-o://5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" gracePeriod=2 Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.406569 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.426355 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") pod \"cd358256-8547-497c-b550-c67a395e34a5\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.426513 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") pod \"cd358256-8547-497c-b550-c67a395e34a5\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.429552 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities" (OuterVolumeSpecName: "utilities") pod "cd358256-8547-497c-b550-c67a395e34a5" (UID: "cd358256-8547-497c-b550-c67a395e34a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.456866 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t" (OuterVolumeSpecName: "kube-api-access-z969t") pod "cd358256-8547-497c-b550-c67a395e34a5" (UID: "cd358256-8547-497c-b550-c67a395e34a5"). InnerVolumeSpecName "kube-api-access-z969t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.530155 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") pod \"cd358256-8547-497c-b550-c67a395e34a5\" (UID: \"cd358256-8547-497c-b550-c67a395e34a5\") " Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.531230 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.531263 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z969t\" (UniqueName: \"kubernetes.io/projected/cd358256-8547-497c-b550-c67a395e34a5-kube-api-access-z969t\") on node \"crc\" DevicePath \"\"" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.692827 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd358256-8547-497c-b550-c67a395e34a5" (UID: "cd358256-8547-497c-b550-c67a395e34a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.780073 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd358256-8547-497c-b550-c67a395e34a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.930923 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd358256-8547-497c-b550-c67a395e34a5" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" exitCode=0 Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.930984 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5btxz" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.930986 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79"} Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.931040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5btxz" event={"ID":"cd358256-8547-497c-b550-c67a395e34a5","Type":"ContainerDied","Data":"9da8f2722c3a4ae2a71f48976d74b6f30505d129cc5c03322698b370190492b5"} Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.931068 4949 scope.go:117] "RemoveContainer" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.975173 4949 scope.go:117] "RemoveContainer" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.982581 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:51 crc kubenswrapper[4949]: I0120 15:50:51.993354 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5btxz"] Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.014019 4949 scope.go:117] "RemoveContainer" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.074330 4949 scope.go:117] "RemoveContainer" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" Jan 20 15:50:52 crc kubenswrapper[4949]: E0120 15:50:52.075000 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79\": container with ID starting with 5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79 not found: ID does not exist" containerID="5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.075037 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79"} err="failed to get container status \"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79\": rpc error: code = NotFound desc = could not find container \"5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79\": container with ID starting with 5966dde6b229b9f2484efbb33041346511fc24ecc750a3af842142c0fd133f79 not found: ID does not exist" Jan 20 15:50:52 crc 
kubenswrapper[4949]: I0120 15:50:52.075064 4949 scope.go:117] "RemoveContainer" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" Jan 20 15:50:52 crc kubenswrapper[4949]: E0120 15:50:52.075421 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80\": container with ID starting with 6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80 not found: ID does not exist" containerID="6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.075672 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80"} err="failed to get container status \"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80\": rpc error: code = NotFound desc = could not find container \"6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80\": container with ID starting with 6aa9b79a9cbc89fbbea0592a2bd238676d81b127c9160c45ff0c2e58ea7f5f80 not found: ID does not exist" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.075697 4949 scope.go:117] "RemoveContainer" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" Jan 20 15:50:52 crc kubenswrapper[4949]: E0120 15:50:52.076019 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c\": container with ID starting with fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c not found: ID does not exist" containerID="fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.076044 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c"} err="failed to get container status \"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c\": rpc error: code = NotFound desc = could not find container \"fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c\": container with ID starting with fe1357fab01b82e63ba52b1f5107703e54c664503cd63269d27181941e98a46c not found: ID does not exist" Jan 20 15:50:52 crc kubenswrapper[4949]: I0120 15:50:52.810195 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd358256-8547-497c-b550-c67a395e34a5" path="/var/lib/kubelet/pods/cd358256-8547-497c-b550-c67a395e34a5/volumes" Jan 20 15:50:57 crc kubenswrapper[4949]: I0120 15:50:57.027951 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:50:57 crc kubenswrapper[4949]: I0120 15:50:57.035729 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-q7rxq"] Jan 20 15:50:57 crc kubenswrapper[4949]: I0120 15:50:57.788660 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:50:57 crc kubenswrapper[4949]: E0120 15:50:57.789017 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:50:58 crc kubenswrapper[4949]: I0120 15:50:58.809829 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1501061b-c734-43b8-8f88-0d895789e209" path="/var/lib/kubelet/pods/1501061b-c734-43b8-8f88-0d895789e209/volumes" Jan 20 15:51:08 crc kubenswrapper[4949]: I0120 15:51:08.789160 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:08 crc kubenswrapper[4949]: E0120 15:51:08.790127 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:51:19 crc kubenswrapper[4949]: I0120 15:51:19.136732 4949 scope.go:117] "RemoveContainer" containerID="d1185fba5ac50e378c845b742be91f772d83fecbf8e284d8d6c3788d93e191be" Jan 20 15:51:19 crc kubenswrapper[4949]: I0120 15:51:19.187398 4949 scope.go:117] "RemoveContainer" containerID="7534ab81bdd16531f6d8d067d32c880d228b71ea42d0b7232ec112812a44a89c" Jan 20 15:51:19 crc kubenswrapper[4949]: I0120 15:51:19.248189 4949 scope.go:117] "RemoveContainer" containerID="e624f35bb39ec45aadfacd65516b3a22eeef144c564f72b34f72f2c1e14f8fe5" Jan 20 15:51:20 crc kubenswrapper[4949]: I0120 15:51:20.789580 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:20 crc kubenswrapper[4949]: E0120 15:51:20.790210 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:51:35 crc kubenswrapper[4949]: I0120 15:51:35.789953 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:35 crc kubenswrapper[4949]: E0120 15:51:35.791136 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:51:49 crc kubenswrapper[4949]: I0120 15:51:49.789861 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:51:49 crc kubenswrapper[4949]: E0120 15:51:49.790737 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:04 crc kubenswrapper[4949]: I0120 15:52:04.804461 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:04 crc kubenswrapper[4949]: E0120 15:52:04.805484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:15 crc kubenswrapper[4949]: I0120 15:52:15.789883 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:15 crc kubenswrapper[4949]: E0120 15:52:15.790757 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:30 crc kubenswrapper[4949]: I0120 15:52:30.791241 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:30 crc kubenswrapper[4949]: E0120 15:52:30.792229 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:43 crc kubenswrapper[4949]: I0120 15:52:43.789923 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:43 crc kubenswrapper[4949]: E0120 15:52:43.790968 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:52:58 crc kubenswrapper[4949]: I0120 15:52:58.790645 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:52:58 crc kubenswrapper[4949]: E0120 15:52:58.791485 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" 
podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:53:09 crc kubenswrapper[4949]: I0120 15:53:09.789188 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:53:09 crc kubenswrapper[4949]: E0120 15:53:09.790177 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:53:23 crc kubenswrapper[4949]: I0120 15:53:23.790371 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:53:23 crc kubenswrapper[4949]: E0120 15:53:23.791881 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:53:37 crc kubenswrapper[4949]: I0120 15:53:37.791654 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:53:37 crc kubenswrapper[4949]: E0120 15:53:37.792754 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:53:51 crc kubenswrapper[4949]: I0120 15:53:51.789701 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:53:51 crc kubenswrapper[4949]: E0120 15:53:51.791231 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:54:02 crc kubenswrapper[4949]: I0120 15:54:02.789469 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:54:02 crc kubenswrapper[4949]: E0120 15:54:02.790288 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:54:14 crc kubenswrapper[4949]: I0120 15:54:14.801319 4949 scope.go:117] "RemoveContainer" 
containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:54:14 crc kubenswrapper[4949]: E0120 15:54:14.802215 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:54:27 crc kubenswrapper[4949]: I0120 15:54:27.789160 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:54:27 crc kubenswrapper[4949]: E0120 15:54:27.791109 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:54:38 crc kubenswrapper[4949]: I0120 15:54:38.807656 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:54:38 crc kubenswrapper[4949]: E0120 15:54:38.808918 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:54:51 crc kubenswrapper[4949]: I0120 15:54:51.789332 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:54:51 crc kubenswrapper[4949]: E0120 15:54:51.790011 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kgqjd_openshift-machine-config-operator(2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e)\"" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" Jan 20 15:55:05 crc kubenswrapper[4949]: I0120 15:55:05.789430 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf" Jan 20 15:55:06 crc kubenswrapper[4949]: I0120 15:55:06.605739 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532"} Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.759599 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"] Jan 20 15:55:44 crc kubenswrapper[4949]: E0120 15:55:44.761802 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-content" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.761888 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-content" Jan 20 15:55:44 crc kubenswrapper[4949]: E0120 15:55:44.761949 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-utilities" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.762009 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="extract-utilities" Jan 20 15:55:44 crc kubenswrapper[4949]: E0120 15:55:44.762075 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.762126 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.762358 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd358256-8547-497c-b550-c67a395e34a5" containerName="registry-server" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.763944 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.787499 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"] Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.940331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.940799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6" Jan 20 15:55:44 crc kubenswrapper[4949]: I0120 15:55:44.941047 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6" Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.042875 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6" Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.043090 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6" Jan 20 15:55:45 crc kubenswrapper[4949]: 
I0120 15:55:45.043160 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.043445 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.043464 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.073154 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"redhat-marketplace-m9hq6\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") " pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.087693 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:45 crc kubenswrapper[4949]: I0120 15:55:45.609317 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.000874 4949 generic.go:334] "Generic (PLEG): container finished" podID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7" exitCode=0
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.000953 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"}
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.001233 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerStarted","Data":"1f99c44e4ba47114f5fcc23dea1a7e5e90cc07eb1053004959a4b030db8bea3f"}
Jan 20 15:55:46 crc kubenswrapper[4949]: I0120 15:55:46.004782 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 15:55:47 crc kubenswrapper[4949]: I0120 15:55:47.025144 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerStarted","Data":"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"}
Jan 20 15:55:48 crc kubenswrapper[4949]: I0120 15:55:48.038128 4949 generic.go:334] "Generic (PLEG): container finished" podID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34" exitCode=0
Jan 20 15:55:48 crc kubenswrapper[4949]: I0120 15:55:48.038431 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"}
Jan 20 15:55:49 crc kubenswrapper[4949]: I0120 15:55:49.054319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerStarted","Data":"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"}
Jan 20 15:55:49 crc kubenswrapper[4949]: I0120 15:55:49.082944 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9hq6" podStartSLOduration=2.574477431 podStartE2EDuration="5.082918576s" podCreationTimestamp="2026-01-20 15:55:44 +0000 UTC" firstStartedPulling="2026-01-20 15:55:46.004422596 +0000 UTC m=+3941.814253464" lastFinishedPulling="2026-01-20 15:55:48.512863711 +0000 UTC m=+3944.322694609" observedRunningTime="2026-01-20 15:55:49.075454902 +0000 UTC m=+3944.885285780" watchObservedRunningTime="2026-01-20 15:55:49.082918576 +0000 UTC m=+3944.892749434"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.088488 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.089166 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.146720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.213618 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:55 crc kubenswrapper[4949]: I0120 15:55:55.385814 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:57 crc kubenswrapper[4949]: I0120 15:55:57.138455 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9hq6" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server" containerID="cri-o://f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f" gracePeriod=2
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.131611 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155733 4949 generic.go:334] "Generic (PLEG): container finished" podID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f" exitCode=0
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"}
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155813 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9hq6" event={"ID":"ca270d2a-2bc4-49ee-ac79-58d0206557c1","Type":"ContainerDied","Data":"1f99c44e4ba47114f5fcc23dea1a7e5e90cc07eb1053004959a4b030db8bea3f"}
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155832 4949 scope.go:117] "RemoveContainer" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.155986 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9hq6"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.187961 4949 scope.go:117] "RemoveContainer" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.207410 4949 scope.go:117] "RemoveContainer" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.219708 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") pod \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") "
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.219897 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") pod \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") "
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.220033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") pod \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\" (UID: \"ca270d2a-2bc4-49ee-ac79-58d0206557c1\") "
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.220826 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities" (OuterVolumeSpecName: "utilities") pod "ca270d2a-2bc4-49ee-ac79-58d0206557c1" (UID: "ca270d2a-2bc4-49ee-ac79-58d0206557c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.226811 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r" (OuterVolumeSpecName: "kube-api-access-xsl4r") pod "ca270d2a-2bc4-49ee-ac79-58d0206557c1" (UID: "ca270d2a-2bc4-49ee-ac79-58d0206557c1"). InnerVolumeSpecName "kube-api-access-xsl4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.245439 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca270d2a-2bc4-49ee-ac79-58d0206557c1" (UID: "ca270d2a-2bc4-49ee-ac79-58d0206557c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.302015 4949 scope.go:117] "RemoveContainer" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"
Jan 20 15:55:58 crc kubenswrapper[4949]: E0120 15:55:58.302581 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f\": container with ID starting with f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f not found: ID does not exist" containerID="f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.302693 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f"} err="failed to get container status \"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f\": rpc error: code = NotFound desc = could not find container \"f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f\": container with ID starting with f013e9d37261aad0203126fa8c3328f8a313fe26a0691f4b5653392d01c0433f not found: ID does not exist"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.302748 4949 scope.go:117] "RemoveContainer" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"
Jan 20 15:55:58 crc kubenswrapper[4949]: E0120 15:55:58.303402 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34\": container with ID starting with d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34 not found: ID does not exist" containerID="d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.303432 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34"} err="failed to get container status \"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34\": rpc error: code = NotFound desc = could not find container \"d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34\": container with ID starting with d06abee2a78251e4c531ea66be96bb06b2a51d8db6a69157a41b74a61dc00c34 not found: ID does not exist"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.303455 4949 scope.go:117] "RemoveContainer" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"
Jan 20 15:55:58 crc kubenswrapper[4949]: E0120 15:55:58.303832 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7\": container with ID starting with bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7 not found: ID does not exist" containerID="bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.303852 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7"} err="failed to get container status \"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7\": rpc error: code = NotFound desc = could not find container \"bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7\": container with ID starting with bcd250a0983bbd09859216c5e0b4e0f498bd4272039841d3d6d2e913716e09a7 not found: ID does not exist"
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.322616 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsl4r\" (UniqueName: \"kubernetes.io/projected/ca270d2a-2bc4-49ee-ac79-58d0206557c1-kube-api-access-xsl4r\") on node \"crc\" DevicePath \"\""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.322668 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.322687 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca270d2a-2bc4-49ee-ac79-58d0206557c1-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.499291 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.512286 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9hq6"]
Jan 20 15:55:58 crc kubenswrapper[4949]: I0120 15:55:58.801569 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" path="/var/lib/kubelet/pods/ca270d2a-2bc4-49ee-ac79-58d0206557c1/volumes"
Jan 20 15:57:27 crc kubenswrapper[4949]: I0120 15:57:27.164046 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:57:27 crc kubenswrapper[4949]: I0120 15:57:27.164694 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.173827 4949 generic.go:334] "Generic (PLEG): container finished" podID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f" exitCode=0
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.173883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kk8nn/must-gather-ccspq" event={"ID":"3cf0a23e-747e-442b-b15a-d9db29607be8","Type":"ContainerDied","Data":"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"}
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.175832 4949 scope.go:117] "RemoveContainer" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:32 crc kubenswrapper[4949]: I0120 15:57:32.708852 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kk8nn_must-gather-ccspq_3cf0a23e-747e-442b-b15a-d9db29607be8/gather/0.log"
Jan 20 15:57:34 crc kubenswrapper[4949]: E0120 15:57:34.445427 4949 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.41:53902->38.102.83.41:36705: write tcp 38.102.83.41:53902->38.102.83.41:36705: write: connection reset by peer
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.533317 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"]
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.534426 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kk8nn/must-gather-ccspq" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy" containerID="cri-o://e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f" gracePeriod=2
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.545798 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kk8nn/must-gather-ccspq"]
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.994879 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kk8nn_must-gather-ccspq_3cf0a23e-747e-442b-b15a-d9db29607be8/copy/0.log"
Jan 20 15:57:40 crc kubenswrapper[4949]: I0120 15:57:40.995461 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.132888 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") pod \"3cf0a23e-747e-442b-b15a-d9db29607be8\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") "
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.133113 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") pod \"3cf0a23e-747e-442b-b15a-d9db29607be8\" (UID: \"3cf0a23e-747e-442b-b15a-d9db29607be8\") "
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.140446 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn" (OuterVolumeSpecName: "kube-api-access-5mjpn") pod "3cf0a23e-747e-442b-b15a-d9db29607be8" (UID: "3cf0a23e-747e-442b-b15a-d9db29607be8"). InnerVolumeSpecName "kube-api-access-5mjpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.235588 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mjpn\" (UniqueName: \"kubernetes.io/projected/3cf0a23e-747e-442b-b15a-d9db29607be8-kube-api-access-5mjpn\") on node \"crc\" DevicePath \"\""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.277962 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kk8nn_must-gather-ccspq_3cf0a23e-747e-442b-b15a-d9db29607be8/copy/0.log"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.278692 4949 generic.go:334] "Generic (PLEG): container finished" podID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f" exitCode=143
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.278747 4949 scope.go:117] "RemoveContainer" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.278749 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kk8nn/must-gather-ccspq"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.301835 4949 scope.go:117] "RemoveContainer" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.331860 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3cf0a23e-747e-442b-b15a-d9db29607be8" (UID: "3cf0a23e-747e-442b-b15a-d9db29607be8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.337220 4949 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3cf0a23e-747e-442b-b15a-d9db29607be8-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.340360 4949 scope.go:117] "RemoveContainer" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"
Jan 20 15:57:41 crc kubenswrapper[4949]: E0120 15:57:41.340951 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f\": container with ID starting with e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f not found: ID does not exist" containerID="e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.340994 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f"} err="failed to get container status \"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f\": rpc error: code = NotFound desc = could not find container \"e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f\": container with ID starting with e89ba97754c1ae6f8227b283b006aa7e890a616402782da28a4e4d12cc7e9a7f not found: ID does not exist"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.341136 4949 scope.go:117] "RemoveContainer" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:41 crc kubenswrapper[4949]: E0120 15:57:41.341489 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f\": container with ID starting with c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f not found: ID does not exist" containerID="c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"
Jan 20 15:57:41 crc kubenswrapper[4949]: I0120 15:57:41.341595 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f"} err="failed to get container status \"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f\": rpc error: code = NotFound desc = could not find container \"c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f\": container with ID starting with c787e736bf44f70db7bb9f26d138bef75a309f90327ed12aa645833aa6feba0f not found: ID does not exist"
Jan 20 15:57:42 crc kubenswrapper[4949]: I0120 15:57:42.799904 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" path="/var/lib/kubelet/pods/3cf0a23e-747e-442b-b15a-d9db29607be8/volumes"
Jan 20 15:57:57 crc kubenswrapper[4949]: I0120 15:57:57.152099 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:57:57 crc kubenswrapper[4949]: I0120 15:57:57.152761 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.153092 4949 patch_prober.go:28] interesting pod/machine-config-daemon-kgqjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.153901 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.153968 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.154842 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532"} pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.154936 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" podUID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerName="machine-config-daemon" containerID="cri-o://738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532" gracePeriod=600
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742115 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e" containerID="738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532" exitCode=0
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742199 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerDied","Data":"738517fd213ca0978010b7850835098e7a0942205a60b045f8e99dd644db7532"}
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742900 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kgqjd" event={"ID":"2c9c7916-1f51-47f7-abe3-2ec9cd2a1f5e","Type":"ContainerStarted","Data":"bc3fef792b2aaf3deb6e4efaa740bd3b894f193b180d2c2f9cdb8c064d84fc34"}
Jan 20 15:58:27 crc kubenswrapper[4949]: I0120 15:58:27.742966 4949 scope.go:117] "RemoveContainer" containerID="e69584ca2074550abc69a4857c5f80025a0a1867241f1bd0c404bbb05530bdcf"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.184703 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"]
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.186912 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="gather"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187022 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="gather"
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187112 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-content"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187188 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-content"
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187292 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-utilities"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187369 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="extract-utilities"
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187458 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187552 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server"
Jan 20 16:00:00 crc kubenswrapper[4949]: E0120 16:00:00.187650 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.187727 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.188008 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="copy"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.188098 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf0a23e-747e-442b-b15a-d9db29607be8" containerName="gather"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.188198 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca270d2a-2bc4-49ee-ac79-58d0206557c1" containerName="registry-server"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.189061 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.191822 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.192346 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.194033 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"]
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.218489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.218613 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.219495 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.321865 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.321945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.322077 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.324339 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.334697 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.408780 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"collect-profiles-29482080-t7lj5\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.519611 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:00 crc kubenswrapper[4949]: I0120 16:00:00.965046 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"]
Jan 20 16:00:00 crc kubenswrapper[4949]: W0120 16:00:00.972149 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod598c0ad9_75c0_46a0_9489_ac71a51debee.slice/crio-90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a WatchSource:0}: Error finding container 90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a: Status 404 returned error can't find the container with id 90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a
Jan 20 16:00:01 crc kubenswrapper[4949]: I0120 16:00:01.727833 4949 generic.go:334] "Generic (PLEG): container finished" podID="598c0ad9-75c0-46a0-9489-ac71a51debee" containerID="359ca9b84737be2580278b549798d59dd6f1a73becc824d866173184a0cdd102" exitCode=0
Jan 20 16:00:01 crc kubenswrapper[4949]: I0120 16:00:01.727899 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" event={"ID":"598c0ad9-75c0-46a0-9489-ac71a51debee","Type":"ContainerDied","Data":"359ca9b84737be2580278b549798d59dd6f1a73becc824d866173184a0cdd102"}
Jan 20 16:00:01 crc kubenswrapper[4949]: I0120 16:00:01.728176 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" event={"ID":"598c0ad9-75c0-46a0-9489-ac71a51debee","Type":"ContainerStarted","Data":"90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a"}
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.101705 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.177287 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") pod \"598c0ad9-75c0-46a0-9489-ac71a51debee\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") "
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.177580 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") pod \"598c0ad9-75c0-46a0-9489-ac71a51debee\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") "
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.177728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") pod \"598c0ad9-75c0-46a0-9489-ac71a51debee\" (UID: \"598c0ad9-75c0-46a0-9489-ac71a51debee\") "
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.178732 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume" (OuterVolumeSpecName: "config-volume") pod "598c0ad9-75c0-46a0-9489-ac71a51debee" (UID: "598c0ad9-75c0-46a0-9489-ac71a51debee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.184950 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "598c0ad9-75c0-46a0-9489-ac71a51debee" (UID: "598c0ad9-75c0-46a0-9489-ac71a51debee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.185080 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724" (OuterVolumeSpecName: "kube-api-access-cp724") pod "598c0ad9-75c0-46a0-9489-ac71a51debee" (UID: "598c0ad9-75c0-46a0-9489-ac71a51debee"). InnerVolumeSpecName "kube-api-access-cp724". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.280880 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp724\" (UniqueName: \"kubernetes.io/projected/598c0ad9-75c0-46a0-9489-ac71a51debee-kube-api-access-cp724\") on node \"crc\" DevicePath \"\""
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.281165 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598c0ad9-75c0-46a0-9489-ac71a51debee-config-volume\") on node \"crc\" DevicePath \"\""
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.281246 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/598c0ad9-75c0-46a0-9489-ac71a51debee-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.747973 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5" event={"ID":"598c0ad9-75c0-46a0-9489-ac71a51debee","Type":"ContainerDied","Data":"90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a"}
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.748026 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90eb4c3077690ef68e194e0d0a3f6fcfc1832fef65d1913de6ce6c94390b991a"
Jan 20 16:00:03 crc kubenswrapper[4949]: I0120 16:00:03.748077 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482080-t7lj5"
Jan 20 16:00:04 crc kubenswrapper[4949]: I0120 16:00:04.188610 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"]
Jan 20 16:00:04 crc kubenswrapper[4949]: I0120 16:00:04.198931 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482035-p8tzb"]
Jan 20 16:00:04 crc kubenswrapper[4949]: I0120 16:00:04.801075 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574a1f73-b7b1-4ff1-9621-3c13ad507d66" path="/var/lib/kubelet/pods/574a1f73-b7b1-4ff1-9621-3c13ad507d66/volumes"
Jan 20 16:00:19 crc kubenswrapper[4949]: I0120 16:00:19.644258 4949 scope.go:117] "RemoveContainer" containerID="1e5c2f6206c81a356513a5962ceabe287f73be62df3cd8a2f36dfc56324aef5b"